1 Introduction

Organizations must exist within and compete in a turbulent environment, balancing expenses against income. This paper, which presents an extension to the March (1991) Mutual Learning Model, examines the impact on firm performance of several organizational hiring strategies that differ in deliberative power and in (projected) expense. As with the work this paper extends, it is a high-level model, allowing comparison across stylized organizations. March (1991) shows that turnover can allow an organization to maintain performance in a changing marketplace, as fresh minds bring new insights. Further, he states (pp. 80–81):

The positive effects of moderate turnover depend, of course, on the rules for selecting new recruits. In the present case, recruitment is not affected by the (organizational) code. Replacing departing individuals with recruits closer to the current organizational code would significantly reduce the efficiency of turnover as a source of exploration.

This suggests, on an organizational level, a hiring strategy with no deliberation, where the first candidate presented is immediately hired. Although perhaps an optimal strategy in March's model (and optimal on a cost basis), we know that people, and the organizations they belong to, find it difficult to simply hire the first candidate to come along. We suggest that in many areas what makes an ideal job candidate is not well defined. Because the evaluation of a candidate is relatively ambiguous, many decision-makers (hiring committee members, in this case) will tend to use personal heuristics. One common heuristic, operating in many social contexts, is that of a similarity preference, also known as homophily (McPherson and Smith-Lovin 1987). In this context, it suggests that the hiring committee member does not have an objective set of criteria but instead uses themselves as a yardstick. Although the views of committee members are likely to be highly correlated with March's organizational code in this model, each committee member is an individual with their own view of the environment. We implement the individual selection mechanism as a stochastic hiring function working not on March's construct, the organizational code, but rather on the views of the specific individual agents that form the simulated hiring committee. The stochastic hiring function extends Morgan, Morgan, and Ritter's (2010) work modeling participation in goal-oriented groups. The hiring committee also follows one of several hiring strategies, which differ in cost and in the deliberative capability they afford the organization as a whole. Candidate selection is thus stochastic but influenced by the individuals making the hiring decision and by the larger organization's selection strategy.

Morgan et al. (2010) discuss seven factors that affect the probability of taking beneficial and hostile actions towards a third party. We list the seven factors identified in that paper here and discuss and define them later. When referred to in the body of the text, these factors are italicized. The seven factors are:

  • Group size

  • Group composition

  • Social distance

  • Spatial distance

  • Mutual support and surveillance

  • Presence or absence of legitimate authority figures

  • Task attractiveness.

The goal of Morgan and his colleagues' work was to present some first steps towards a general social reflexivity mechanism, one that would work even for lightweight agent simulations. Although they demonstrated their work by modeling it in the small-group combat domain, it was a goal of that work that it could be applied broadly, and one of the goals of this paper is to demonstrate the approach in an entirely different setting.

From the seven factors presented above, we focus on the impact of group size, group composition, social distance, and mutual support and surveillance. More precise and technical definitions will follow, but we first provide a brief summary of how these terms are used in this paper. Group size is defined as the size of the hiring committee. Group composition is an abstraction of the diversity of the committee, relative to the total diversity available. Social distance is the perceived difference between the job candidate and each committee member; it is computed separately for each dyad. Mutual support and surveillance is an abstraction of the level of urgency in the group to hire individuals similar to themselves, and this urgency, or pressure, is based on the current diversity of the hiring committee. Mutual support and surveillance is, as modeled in this work, essentially a derivative of the group's inherent diversity. We keep the factor distinct because, in future work, organizations may be characterized by various behaviors that indicate the presence of support and surveillance mechanisms and thus moderate the social pressure to conform.

This work does not deal with the other factors identified by Morgan et al. (2010) for the following reasons. We can assume that spatial distance is equal or similar for the vast majority of candidates, although it does present some interesting implications for those candidates who choose to teleconference for their interviews. Furthermore, the presence of legitimate leaders and a constant value for task attractiveness seem implicit to a hiring process, allowing us to hold them constant in this initial work. Presented with a context where these two variables would usefully vary, the integration of those factors seems straightforward.

We also acknowledge that individuals leave firms for many reasons, including:

  • because their goals do not fit the organization (Schneider 1987)

  • because they are not well embedded in their job (Mitchell et al. 2001)

  • because they do not have influence on decisions central to their work (Parker 1993)

  • because of family pressures (Lee and Maurer 1999)

  • because of mismatched expectations (Branham 2005, pp. 31–36).

Rather than model these factors explicitly, we presume a collection of exogenous causes, such as those listed above, and represent them with a random value that determines which agents leave at each time-point. The selection of members for the hiring committee is also random, while the number of members on the hiring committee is fixed, although we see this as an interesting point for future exploration.

2 Related work

In this section, we first describe in more detail March’s simulation. We follow this discussion with a brief literature review of the factors used in the hiring function.

2.1 March’s Mutual Learning Model

March (1991) predicts how organizational knowledge develops from the aggregate of individuals and how this organization may perform in a turbulent environment. His simulation model remains influential. Consequently, it is important to note early the similarities and differences between March’s model and the extension developed in this work.

As in March's model, the model extension posits an external environment (which March calls 'reality'), which changes over time due to un-modeled exogenous factors (the accumulated effect of which can be thought of as turbulence). This turbulence can be thought of as both changes in the local market-place in which the firm operates and new directional changes from top management to compensate for those changes. This environment, however, has some inertia, and thus the conditional probabilities of changing the state favor remaining at the current value. The organization does not interact with or learn from the environment directly, but instead learns from high-performing individuals.

We also posit, as March does, an organizational 'code' that represents the firm's current understanding of its environment. This code is developed over time based on the adoption of the values and knowledge of high performers and is not a simple mathematical summary of these high performers' views. Although our implementation of the model allows for modeling perception as an error-prone process, exploration of possible perceptual models has been reserved for future work.

Finally, in his open system extension (showing both environmental turbulence and turnover), March uses a random function to identify both individuals to be removed and what knowledge newly hired individuals possess. This model extension still uses a random function to identify individuals who depart. However, the model extension uses hiring strategies and biased committee members to determine who is hired. Individuals are distinct based on their views of the environment. The overall process remains stochastic, but each hiring strategy differs in the degree of variation it allows the hiring committee. We discuss in the next section the four factors we focus on for this work. Our operationalization of these factors is discussed in Sect. 3. The hiring strategies are discussed in Sect. 4.2.

2.2 Factors that contribute to similarity bias

Morgan et al. (2010) define seven variables, introduced above, that contribute to the probability of taking a beneficial or negative action towards a third party. They demonstrated the impact of three of these variables, group size, spatial distance, and presence or absence of leaders, in a simulation of ground combat that realistically replicated some of the social dynamics found in war. In contrast, this work focuses on the following variables: (1) group size, (2) group composition, (3) social distance, and (4) mutual support and surveillance. We summarize each of these factors and how they impact the hiring process.

Group size influences individual behavior in a variety of ways. Members of larger groups tend to be able to more easily disassociate themselves from the results of collective action (Grossman 1995). Large groups, despite having the capacity to do so, are less likely to help needy outsiders (Latane and Darley 1970). Larger groups, when compared to dyads, tend to allow more confrontational language and are less concerned about actor participation (Slater 1958).

Consequently, our model associates increases in group size with decreases in the likelihood that any particular committee member will recommend a specific candidate, if all other factors are held equal.

The group composition, the individuals that make up the group, also influences the ability of the group to take collective beneficial or negative action. Drawing on the social integration literature (Harrison et al. 1998), we distinguish between surface (superficial or cosmetic differences) and deep-level (differences in attitudes, beliefs, and skills) diversity. Because candidates are attempting to communicate their knowledge, skills, and professional outlook to the hiring committee, we choose to focus on aspects of deep-level diversity. Further, strong group performance tends to correlate more closely to similarities in beliefs than to surface-level characteristics (Terborg et al. 1976).

Our literature review suggests that there is a strong and interesting interaction between group composition and mutual support and surveillance. Groups that tend to be diverse are likely to be more welcoming of further diversity, whereas groups where individuals tend to be very similar in attitudes and beliefs find it difficult to accept candidates who do not have similar characteristics. We consider this a group level trait, similar to group size.

Members of groups enjoy several benefits from participation: group norms provide identity (Cialdini et al. 1990); rules define and structure ambiguous situations (Chekroun and Brauer 2002) and help members predict the actions of others (Smith and Mackie 1995); and social support may diminish the effects of stress (Caplan 1974).

Groups also impose costs on their members. Groups encourage uniformity, and the pressure to maintain that uniformity increases both when differences between members are small, and when inclusion into the group is privileged (Dinter 1985; Festinger 1954).

Thus, mutual support and surveillance interacts with group composition. When the group is inherently diverse, there is less pressure to maintain group norms. Candidates who are perceived as similar to the hiring committee member are more likely to be selected by that member, provided all other factors are equal. Further, hiring committees of homogeneous individuals are likely to take more time and require the consideration of more candidates if the pool of candidates is itself diverse.

Social distance can be thought of as a continuous scalar, where individuals “just like me” have very low distance scores and individuals who are “not like me” have much larger distance scores. This view follows that advanced by Perloff (1993) and, loosely, that suggested by Park (1924).

We treat social distance as a feature of a dyad: the amount of perceived difference, determined by similarity of beliefs, attitudes, and knowledge, between the observer and the recipient. Individuals with similar attributes tend to interact (McPherson and Smith-Lovin 1987).

A small social distance contributes to a feeling of connection with the candidate, making it more likely that the committee member will suggest offering employment to that candidate, if other factors are held constant.

3 Implementation of a similarity bias through a stochastic selection function

Based on this literature review, we define a hiring function that incorporates these factors. The overall function is a logit transform, which has been useful in previous discrete choice models (McFadden 1980). The complete selection function is defined below; it is a function of functions, with each sub-function defined in turn. Values range between 0 and 1.

Equation 1: The probability that a particular target, t, will be selected by a particular committee member, c.

The probability of a particular actor getting hired is based on the rules of that firm, which will be discussed in Sect. 4.2.

Group Composition is a relative term indicating the amount of differentiation present in the group compared to the maximal amount of possible variation. A group is maximally variable (has a value g_c = 1) if the entire maximal span of variation is represented in the group (g_max,i - g_min,i = max_i - min_i) for every feature i. The smoothing term, k, is to avoid the possibility of division by 0, and should be very small. We use this function to identify how diverse a particular hiring committee is at a particular point in time in comparison to all the variation that could be present in the group. Larger committees are likely to be more diverse. Committees in organizations with less social pressure are likely to be more diverse. Committees which use less deliberative hiring strategies should be more diverse.

Equation 2: Group composition, g_c, is the amount of variability present in the group, the hiring committee, compared to the maximal amount of variability that could be present across n dimensions.
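To make the computation concrete, the following Python sketch implements a group composition value of this kind. The averaging over dimensions and the placement of the smoothing term k are our assumptions; the text specifies only the ratio of represented to possible variation and the purpose of k.

```python
import numpy as np

def group_composition(committee, feature_min, feature_max, k=1e-6):
    """Ratio of the variation spanned by the committee to the maximal
    possible variation, averaged over the n belief dimensions.

    committee   : array of shape (members, n) holding each member's beliefs
    feature_min : array of shape (n,) with the minimum possible value per dimension
    feature_max : array of shape (n,) with the maximum possible value per dimension
    k           : small smoothing constant to avoid division by zero (assumed placement)
    """
    committee = np.asarray(committee, dtype=float)
    span_in_group = committee.max(axis=0) - committee.min(axis=0)      # g_max,i - g_min,i
    span_possible = np.asarray(feature_max) - np.asarray(feature_min)  # max_i - min_i
    ratios = (span_in_group + k) / (span_possible + k)
    return float(ratios.mean())   # g_c = 1 when the full span is represented everywhere

# Example: beliefs drawn from {-1, 0, 1}, as in March's model
committee = [[-1, 0, 1], [1, 0, -1], [0, 1, 1]]
print(group_composition(committee, feature_min=[-1, -1, -1], feature_max=[1, 1, 1]))
```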

Our implementation of the mutual support and surveillance term uses the group composition term, g_c, defined earlier. Because social pressure is very high when group variability is low, we use an inverse function to define social pressure. Because of the k-smoothing term in the definition of g_c, 'pressure' is always defined (although potentially very large). The constant m should be specific to the environment in which the equation is applied. We use the value '.25' in this work; larger values would indicate an environment where more pressure is exerted. Larger committees should have less pressure. Committees more tolerant of diversity should have less pressure. Committees with less deliberative hiring strategies should have less pressure. Committees will tend to experience more pressure over time as the turnover and socialization rates equalize the 'typical diversity' present in the group.

Equation 3: Pressure is the inverse of the calculated group composition value, g_c, mediated by the constant m.
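A minimal sketch of the pressure term, assuming that "the inverse of the group composition value, mediated by the constant m" means m divided by g_c; other inverse forms would also satisfy the description.

```python
def pressure(g_c, m=0.25):
    """Social pressure to conform, high when committee diversity (g_c) is low.

    Assumes the inverse form m / g_c; because g_c is computed with a smoothing
    term k, the result is always defined, though it can be very large.
    """
    return m / g_c

print(pressure(0.05))  # homogeneous committee -> strong pressure (5.0)
print(pressure(0.90))  # diverse committee     -> weak pressure (~0.28)
```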

We represent social distance (d) as a Euclidean distance measure across an arbitrary number of dimensions. Given n environmental features, the committee member, c, compares their own value on each feature (c_i) with the target t's value on that feature (t_i). The square root of the sum of the squared differences gives the distance between the committee member and the target, d_ct.

Equation 4: The social distance between a committee member, c, and a target candidate, t, is a Euclidean distance calculated across n dimensions: d_ct = sqrt(Σ_{i=1..n} (c_i - t_i)^2).
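The social distance computation follows directly from the definition above; a short sketch in Python:

```python
import math

def social_distance(member_beliefs, candidate_beliefs):
    """Euclidean distance between a committee member c and a candidate t
    across n belief dimensions: d_ct = sqrt(sum_i (c_i - t_i)^2)."""
    return math.sqrt(sum((c_i - t_i) ** 2
                         for c_i, t_i in zip(member_beliefs, candidate_beliefs)))

print(social_distance([1, 0, -1, 1], [1, 1, -1, -1]))  # sqrt(0 + 1 + 0 + 4) ~= 2.24
```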

With these functions, we have defined how we have implemented similarity biases through a stochastic selection function. This function indicates the probability of any particular committee member approving of a particular candidate. In Sect. 4, we review the larger simulation and discuss the hiring strategies used.
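Putting the pieces together, the sketch below shows one way the sub-functions could enter the logit transform of Equation 1. The paper states only that the probability is a logit transform of these factors; the particular linear combination, the weights, and the signs below are illustrative assumptions consistent with the qualitative claims made above (larger distance, more pressure, and larger committees all make approval less likely).

```python
import math

def selection_probability(distance, press, group_size,
                          w_dist=1.0, w_size=0.1, bias=2.0):
    """Probability that committee member c approves candidate t (Equation 1 sketch).

    Larger social distance lowers the probability, pressure to conform
    amplifies the distance penalty, and larger committees make any single
    member less likely to recommend a candidate. The combination and the
    default weights are assumptions for illustration only.
    """
    utility = bias - w_dist * distance * (1.0 + press) - w_size * group_size
    return 1.0 / (1.0 + math.exp(-utility))   # logistic (inverse-logit) transform

# A fairly similar candidate, moderate pressure, committee of three:
print(selection_probability(distance=1.0, press=0.25, group_size=3))  # ~0.61
```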

4 Details of the extended model

We are replicating and then extending March’s simulation. Briefly, we will present the overall process that characterizes March’s model and then describe our extensions to this process.

4.1 Overview of operations for the Mutual Learning Model

March’s model (1991) has these initial properties (pp. 74–75):

Within this system, initial conditions include: a reality m-tuple (m dimensions, each of which has a value of 1 or −1, with independent equal probability); an organizational code m-tuple (m dimensions, each of which is initially 0); and n individual m-tuples (m dimensions, with values equal to 1, 0, or −1, with equal probabilities).
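These initial conditions translate directly into code. A minimal sketch (variable names are ours; the sizes shown are those used later in the docking experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

M = 30   # dimensions in each m-tuple (value used in the docking experiment)
N = 50   # individuals in the organization

# Reality: each dimension is 1 or -1 with independent, equal probability.
reality = rng.choice([1, -1], size=M)

# Organizational code: initially 0 on every dimension (no opinion).
code = np.zeros(M, dtype=int)

# Individuals: each belief dimension is 1, 0, or -1 with equal probability.
individuals = rng.choice([1, 0, -1], size=(N, M))
```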

From these starting conditions, the model proceeds as shown in Fig. 1.

Fig. 1 Overview of a single simulation turn in the Mutual Learning Model

In step 1, Organization learns from individuals, the organization identifies high performers, individuals whose beliefs better reflect the environment (in aggregate) than the organization’s code. The dominant opinion among high performers for each portion of the m-tuple will typically be selected. This process is stochastic, and depends on the level of agreement between high performers. This process is moderated by an “organizational learning effectiveness” variable.

In step 2, individuals learn from the organization, the beliefs of individuals change to reflect the organizational code. For any portion of the organization’s m-tuple whose value is not zero, the individual may change their belief to be in accordance with the organizational code. The probability of them doing this for any portion of the m-tuple is determined by an “effectiveness of socialization” variable. Thus, members of the organization lose their heterogeneity over time.

In step 3, reality changes, the environment’s m-tuple is probabilistically changed due to exogenous turbulence. This process is moderated by a “turbulence” variable.

In step 4, individuals leave the organization, individuals are selected randomly from the organization and removed.

In step 5, organization replaces lost members, new individuals join the organization. In March's model, new members of the organization are added as necessary. These new members' beliefs are initialized randomly. The extensions documented in this work are principally to this step and are discussed in more detail in Sect. 4.2.
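The five steps can be summarized as one turn of the simulation. The sketch below is a simplified re-implementation based on the description above, not the authors' code; in particular, the way the margin of agreement among high performers enters step 1 is our reading of March's rule, and step 5 shows March's random replacement, which the extension replaces with the committee mechanisms of Sect. 4.2.

```python
import numpy as np

rng = np.random.default_rng(1)

def performance(beliefs, reality):
    """Fraction of dimensions on which a belief vector matches reality."""
    return np.mean(beliefs == reality)

def simulation_turn(reality, code, individuals,
                    p_learn=0.5, p_social=0.5, p_turb=0.02, p_turnover=0.01):
    n, m = individuals.shape

    # 1. Organization learns from individuals who outperform the code.
    high = np.array([ind for ind in individuals
                     if performance(ind, reality) > performance(code, reality)])
    if high.size:
        for i in range(m):
            majority = int(np.sign(high[:, i].sum()))
            margin = abs(int(high[:, i].sum()))       # level of agreement
            if majority != 0 and rng.random() < 1 - (1 - p_learn) ** margin:
                code[i] = majority

    # 2. Individuals are socialized toward the non-zero dimensions of the code.
    for ind in individuals:
        for i in range(m):
            if code[i] != 0 and rng.random() < p_social:
                ind[i] = code[i]

    # 3. Reality changes due to exogenous turbulence.
    flips = rng.random(m) < p_turb
    reality[flips] *= -1

    # 4. Individuals leave at random; 5. replacements join with random beliefs
    #    (the extension substitutes the hiring-committee strategies here).
    leavers = rng.random(n) < p_turnover
    individuals[leavers] = rng.choice([1, 0, -1], size=(int(leavers.sum()), m))

    return reality, code, individuals

# One turn from the initial conditions of Sect. 4.1:
reality = rng.choice([1, -1], size=30)
code = np.zeros(30, dtype=int)
individuals = rng.choice([1, 0, -1], size=(50, 30))
reality, code, individuals = simulation_turn(reality, code, individuals)
```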

4.2 Organizational hiring strategies

We extend March’s model by modifying the replacement of lost members of the organization through incorporating a hiring committee using one of several hiring strategies, as suggested by Fig. 2. Each selection method has its own work-flow.

Fig. 2 Extensions to the hiring mechanism

The new work involves the use of a hiring committee and various strategies those hiring committees may use to select the agent that fills the vacant position. First, each candidate joins a candidate queue. Second, each candidate is reviewed in turn and the social distance between the candidate and each committee member is calculated. How exactly a candidate is selected is dependent on the selection model. We propose and will evaluate three selection models in this work. These selection models are intended to show varying amounts of deliberation available to the organization. The Immediate Selection method is the most stochastic, whereas the Deliberation strategy narrows the field to very few candidates or a single choice.

In the first model, Immediate Selection, each candidate is presented and then, through a stochastic process, immediately hired or rejected according to their compatibility with the committee members. A candidate who receives approval from the majority of the hiring committee members is hired. Because the committee members may be, due to their unique perspectives, unable to agree on any actor (resulting in effective dead-lock as an organization reviews thousands of applicants), we moderate Equation 4 to incorporate a pressure to come to a consensus, as shown in Equation 5. This new term, d_m, replaces d_ct in Equation 1. The process continues, drawing on an effectively unlimited supply of candidates, until a selection is made.

Equation 5: The pressure to hire candidates grows, in this model, as more and more candidates are reviewed. This is only one method of implementing this pressure. The modified distance value, d_m, replaces the original d_ct value in Equation 1.
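A sketch of the Immediate Selection loop follows. The way the number of candidates already reviewed discounts the distance term (our stand-in for Equation 5) is an assumption; the text states only that the pressure to hire grows as more candidates are reviewed. The per-member approval probability reuses the selection function sketched in Sect. 3.

```python
import math
import random

rng = random.Random(0)

def immediate_selection(committee, new_candidate, approval_probability,
                        social_distance, press, discount=0.05):
    """Review randomly generated candidates one at a time until a simple
    majority of the committee approves one; returns the hire and the number
    of candidates reviewed."""
    reviewed = 0
    while True:
        candidate = new_candidate()
        reviewed += 1
        votes = 0
        for member in committee:
            d_ct = social_distance(member, candidate)
            d_m = d_ct / (1.0 + discount * reviewed)   # assumed form of Equation 5
            if rng.random() < approval_probability(d_m, press, len(committee)):
                votes += 1
        if votes > len(committee) / 2:                  # simple majority hires
            return candidate, reviewed

# Toy demonstration over 5 belief dimensions:
committee = [[rng.choice([1, 0, -1]) for _ in range(5)] for _ in range(3)]
dist = lambda c, t: math.sqrt(sum((ci - ti) ** 2 for ci, ti in zip(c, t)))
prob = lambda d, press, size: 1.0 / (1.0 + math.exp(d * (1 + press) + 0.1 * size - 2.0))
hired, n_reviewed = immediate_selection(
    committee, lambda: [rng.choice([1, 0, -1]) for _ in range(5)], prob, dist, press=0.25)
print(n_reviewed, hired)
```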

In the second model, Deliberation, a pool (defined as a queue but ordering is immaterial) of a hundred candidates is instantiated (with their composition defined stochastically). The committee selects the candidate that has the lowest total social distance to all the committee members. If candidates tie for the lowest social distance, the selection will be determined randomly. The precise number of candidates in the pool is not a firm commitment of the model but we use one hundred (100) to suggest a relatively large deliberative capacity for the organization.
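The Deliberation strategy reduces to a minimum search over a fixed pool; a short sketch (the pool size of 100 and the {1, 0, -1} belief alphabet come from the text and from March's model):

```python
import math
import random

rng = random.Random(0)

def deliberation(committee, n_dims=30, pool_size=100):
    """Instantiate a pool of randomly composed candidates and hire the one
    with the lowest total social distance to the committee; ties are broken
    at random by shuffling before the min() scan."""
    pool = [[rng.choice([1, 0, -1]) for _ in range(n_dims)] for _ in range(pool_size)]
    rng.shuffle(pool)   # random tie-breaking among equally distant candidates
    def total_distance(candidate):
        return sum(math.sqrt(sum((c - t) ** 2 for c, t in zip(member, candidate)))
                   for member in committee)
    return min(pool, key=total_distance)

committee = [[rng.choice([1, 0, -1]) for _ in range(30)] for _ in range(3)]
print(deliberation(committee)[:5])   # first few beliefs of the hired candidate
```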

In the third model, Selection with Deliberation, a queue of candidates is defined. Each candidate is processed and a score assigned. The committee makes a hiring decision for each candidate as in the Immediate Selection model (the first model), but upon deciding that a candidate is agreeable to the firm, the committee checks the previous ten (10) candidates (including the current selection) and confirms that the candidate is the best choice (based on lowest total social distance to the committee members) of those ten. If ten candidates have not yet been reviewed before the candidate has been selected, the pool includes all candidates reviewed up to that point. The best of the ten (on the basis of similarity to the committee) is selected for the position. Thus, a candidate may trigger the final hiring decision but not be the final target of that decision. Ten is, again, an arbitrary number intended to suggest more deliberative capacity than the Immediate Selection method but less than that suggested by the one hundred candidates that can be simultaneously reviewed by the Deliberation method.
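Selection with Deliberation combines the two mechanisms above: an Immediate-Selection-style vote triggers the hire, but the offer goes to the most compatible of the last ten candidates reviewed. A sketch, with approves() standing in for the per-member stochastic decision of Sect. 3:

```python
from collections import deque
import math
import random

rng = random.Random(0)

def selection_with_deliberation(committee, new_candidate, approves, window=10):
    """Review candidates until one wins a majority vote, then hire the most
    compatible (lowest total social distance) of the last `window` candidates
    reviewed, including the candidate that triggered the decision."""
    recent = deque(maxlen=window)
    reviewed = 0
    while True:
        candidate = new_candidate()
        reviewed += 1
        recent.append(candidate)
        votes = sum(approves(member, candidate, reviewed) for member in committee)
        if votes > len(committee) / 2:     # trigger: the committee accepts someone
            def total_distance(cand):
                return sum(math.sqrt(sum((c - t) ** 2 for c, t in zip(member, cand)))
                           for member in committee)
            return min(recent, key=total_distance), reviewed

# Toy demonstration; approves() grows more permissive as more candidates are seen.
committee = [[rng.choice([1, 0, -1]) for _ in range(5)] for _ in range(3)]
approves = lambda member, cand, k: rng.random() < 0.2 + 0.01 * k
hired, n_reviewed = selection_with_deliberation(
    committee, lambda: [rng.choice([1, 0, -1]) for _ in range(5)], approves)
print(n_reviewed, hired)
```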

These selection models are intended to be representative, but not exhaustive, of the possible selection models available. Rules could differ on, for example, what portions of the m-tuple each committee member reviews (all current models have each committee member review the entire space), alternative voting systems, and how the over-time pressure is implemented in Equation 5.

Adding this hiring committee requires some adjustments to the overall simulation turn cycle, as shown in Fig. 3.

Fig. 3 Modifications to March's Mutual Learning Model are in the departure and hiring phase

The modifications to the departure process are relatively trivial; individuals leave the firm at random, as before. Individuals from the hiring committee are not exempt from this process. If a member of the hiring committee leaves, they are replaced at random with a new committee member from the larger organization. We replace members of the committee with a random member of the organization because the organization has no implicit structure. We could imagine preferential rules for selecting replacements, such as longest tenure or highest performance, but we keep the selection process purely random. Future extensions of this model could incorporate the idea of forming the hiring committee based on the position being filled.

Our additions to the hiring process are more interesting. As before, new individuals are generated randomly (following the process described for initializing the simulation), but these individuals are candidates. Each candidate is reviewed by the hiring committee, and each committee member makes their choice independently, using the selection function defined previously. The aggregate of the members' individual selections is used to determine whether the candidate is allowed to join the organization as a member.

5 Validation of the extension

Before exploring the impact of hiring strategy methods, we need to ensure that the simulation was developed correctly and accurately replicates the behavior of the Mutual Learning Model. We must first evaluate our extension by comparing March's published results against the new simulation's operation; our goal is to successfully replicate March's finding that turnover is an effective mechanism for retaining organizational performance. We then can have reasonable confidence that our implementation works as intended. In this, we are using the model comparison method known as “Docking” (Axtell et al. 1996). We are interested in establishing that the two models show relational similarity: performance is stable when there is turnover with random selection, whereas performance degrades when there is no turnover.

5.1 Evaluating relational equivalence—experimental design

We compare, as March does, the impact of turnover as a counter-measure to that of environmental turbulence. The experimental variable was the amount of turnover, which was set to either .01 (each person having a 1 % chance of leaving the organization each turn) or 0 (no chance of leaving). The organizational learning effectiveness variable was set to .5 and the effectiveness of socialization variable was set to .5. The “environment turbulence” variable was set to .02 (each portion of the environment M-Tuple had a 2 % chance of changing). Each organization was composed of 50 actors, and there were 30 bits in the environment M-Tuple. Each simulation had 100 turns, and each condition had 200 separate simulations for a total of 400 separate simulation runs. We evaluate firms based on “Code Knowledge”, which measures what percentage of the environment M-Tuple the organization’s knowledge correctly reflects. This design, summarized in Table 1, is as similar as possible to the values used in March (1991) to test the efficacy of turnover.

Table 1 Docking experiment variables and constants
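For reference, the parameters of this docking experiment can be collected in one place (values taken from the text above; the key names are ours):

```python
DOCKING_EXPERIMENT = {
    "turnover":                    [0.01, 0.0],   # experimental variable (1 % vs. none)
    "org_learning_effectiveness":   0.5,
    "socialization_effectiveness":  0.5,
    "environmental_turbulence":     0.02,
    "organization_size":            50,
    "m_tuple_length":               30,
    "turns_per_simulation":         100,
    "runs_per_condition":           200,           # 2 conditions -> 400 runs in total
    "outcome_metric":               "code knowledge (% of M-Tuple correctly reflected)",
}
```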

5.2 Establishing relational equivalence—results

This experiment compared the impact of turnover on an organization; our goal was to replicate March's finding that turnover is a useful explorative mechanism. Replicating this finding allowed us to verify that the system was coded properly, in that the organization without turnover reaches an equilibrium point relatively early. Having reached this point, the organization no longer changes, because all the actors and the organization hold identical M-tuples; as the environment continues to change, performance declines steadily toward 50 %, at which point the system's turn-by-turn performance takes on the characteristics of a random walk.

As indicated in Fig. 4, we were able to replicate this behavior. On the chart, the compound line shows the average performance of organizations without turnover. The darker solid line shows the average performance of firms with turnover. In both cases, the organization achieves a certain amount of knowledge, well above random-chance acquisition. Once the knowledge equilibrium is reached, however, the no-turnover organization begins to stagnate, declining steadily in the face of consistent minor turbulence.

Fig. 4 Both models predict that turnover is an effective mechanism for handling an environment with turbulence

From this result, we can establish that the two models share (a) Component Equivalence (i.e., the models contain the same objects) and (b) Relational Equivalence, where the models have similar relationships between these objects. March uses a non-obvious transform of the code knowledge metric so that performance dwindles toward 0 rather than fifty (50) percent. Because of this, we do not establish statistical or numerical equivalence; because we are not interested in comparing these models “head to head”, relational equivalence is sufficient for our needs.

6 Comparing hiring strategies

In this section, we examine the interplay of committee size, hiring selection method, and organizational profile on organizational code accuracy and applicant review rates.

We see the organizational code accuracy metric as a construct analogous to the firm’s performance and the applicant review rates as a way of examining the costs imposed on the organization to maintain turnover with a basic but plausible model of human bias and organizational selection.

6.1 Comparing hiring strategies, experimental design

This virtual experiment considers the impact of the hiring committee—do hiring committees affect an organization’s performance over time? Committee members were selected randomly from the larger population pool. We used four experimental variables.

The first variable, Hiring Committee, is a binary categorical variable. Firms either use a hiring committee, as discussed previously, or they do not, following March's original method of simple stochastic replacement. The second variable, Firm Profile, is also categorical; we considered three firm profiles: (1) a firm that values exploration, allowing members of the organization to remain diverse and building knowledge slowly; (2) a firm that is exploitative in nature, where individuals rapidly conform to the organizational code, and the organization establishes opinions early; and (3) a firm with mid-range values, neither fast nor slow to socialize employees or gain organizational knowledge. The third variable, Committee Selection Model, only applied in cases where there was a hiring committee. There were three possible values for this variable, corresponding to the three selection models discussed in the previous section: (1) Immediate Selection, where the committee immediately chooses to hire or reject a candidate; (2) Deliberation, where the committee selects the most compatible member from among a given set; and (3) Selection with Deliberation, where the committee makes a hiring decision but then selects the most compatible candidate from among a set of recently reviewed candidates. The fourth variable, Committee Size, is a discrete quantitative variable with only two values in the current model, three and seven. All other variables were held constant. There were a total of (Committee (3×3×2) + No Committee (3)) twenty-one combinations; each combination ran for 200 simulation runs, for a total of 4200 simulation runs. This design is summarized in Table 2.

Table 2 Hiring committees and organizational performance variables and constants
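The design can be enumerated directly; the short sketch below confirms the count of twenty-one conditions and 4200 runs (the labels are ours):

```python
from itertools import product

profiles = ["explorative", "exploitative", "mid-range"]
selection_models = ["immediate", "deliberation", "selection_with_deliberation"]
committee_sizes = [3, 7]

with_committee = list(product(profiles, selection_models, committee_sizes))  # 3 x 3 x 2 = 18
no_committee = [(profile, "none", None) for profile in profiles]             # 3 baselines

conditions = with_committee + no_committee
print(len(conditions))        # 21 combinations
print(len(conditions) * 200)  # 4200 simulation runs
```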

We expected that firms with exploitative profiles (firms focusing on conformance) would find it difficult to hire new candidates that fit their established ‘type’. We anticipated this to be true because we believed that the hiring committees for these firms would be less diverse, and thus the social pressure to maintain conformity would be greater than that of the two other profiles. From the selection models we have defined, we would expect that the Immediate Selection model should have results most similar to the baseline results of having no committee as the pressure to select any candidate at all may outweigh the demands of the committee members, while the Deliberative model may produce results most distinct from the baseline. We believed that the smaller committees, which should tend to have more conformity, may tend to review more candidates than the larger committees, given that other factors are held equal.

Our primary performance metric is “Code Knowledge”, which measures what percentage of the environment M-Tuple the organization's knowledge correctly reflects. We measure code knowledge on a scale ranging from 0 to 100, with 100 being perfect performance. All organizations started with a “0” score, because they start with no opinion on any portion of the M-Tuple. Even low-performance organizations trend towards a ‘50’ or higher, as random chance perturbs the environment. We also review and present results related to the number of candidates reviewed over the course of the simulation for the Selection models and their interactions with the firm's profile (the Deliberation model presents a constant number of candidates, hence that analysis is not applicable). We see the number of candidates that each committee must review for each position as a construct analogous to an important subset of HR costs imposed on that organization.
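The code knowledge metric follows directly from this description; a sketch (dimensions on which the code holds no opinion count as incorrect, which is why organizations start at 0):

```python
import numpy as np

def code_knowledge(code, reality):
    """Percentage (0-100) of the environment M-Tuple that the organizational
    code correctly reflects; a 0 (no opinion) never matches reality."""
    return 100.0 * np.mean(np.asarray(code) == np.asarray(reality))

print(code_knowledge([0, 1, -1, 1], [1, 1, -1, -1]))  # 50.0
```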

6.2 Comparing hiring strategies—results

In our second experiment, we examined the impact of hiring committees by comparing firms with and without hiring committees. Further, for those firms with hiring committees, we moderated both the size of the hiring committee and its selection method.

As expected and shown in Fig. 5, firms with different profiles responded differently to the simulation environment used for all experiments. In a more turbulent environment and with more turnover, exploitative firms would be expected to do better, as they would be able to quickly integrate new staff. Because of the low turnover relative to high turbulence, the exploitative firms (the dashed line) do not break the threshold we used for this graph until later, on average, than the other two firms.

Fig. 5 In this turbulent environment with relatively little turnover, the Explorative firm does best. This chart compares base-cases—firms without hiring committees

March predicts that hiring individuals based on their similarity to the code (and presumably to a hiring committee of people influenced by the code) would harm the efficacy of turnover as a mechanism for maintaining an organization’s performance (March 1991, p. 81). Yet, we believe that people find it difficult to simply hire by flipping a coin. As such, we developed a plausible model of human bias based on homophily and then placed those biased actors within example organizational structures that vary in deliberative capacity.

All of these models, as shown in Fig. 6, tend to lower performance. Results are normalized against the base-rate mean and peak performance of each firm profile. As March predicted, the more deliberative capacity the organization possesses, the less effective turnover becomes. The pattern is consistent across all firm profiles. Decrements to average performance are as much as 5 % across the entire time course of the simulation.

Fig. 6 All hiring strategies tend to lower performance, although the Deliberation Strategy, which allows for optimal selection from among 100 candidates, has the largest decrement on performance. This is, we believe, consistent with March's predictions

In Fig. 7, we see the impact of the pressure to conform over the simulation’s time course. The “effectiveness of socialization” variable influenced the diversity of the hiring committee—this was an inverse relationship. When the committee was highly diverse (unlikely in organizations that prioritize socialization), there was relatively little pressure to hire extremely similar candidates—this can be seen in the solid single line below. When the committee was very similar, it became very difficult to find acceptable candidates out of the diverse candidate pool. Thus, the committee’s diversity, and indirectly the stress the organization put on socialization, strongly impacted the number of candidates reviewed before finding an acceptable person for each position.

Fig. 7 Firms that stressed socialization reviewed many more applicants than those that did not

We also wanted to examine the effect of committee size on both organizational performance and the number of candidates reviewed. We believed that the diversity evident in a larger group would decrease the number of applicants that needed to be reviewed, despite the additional requirement of securing more votes. Because fewer applicants would be reviewed, we suspected that this would have a positive, albeit marginal, effect on performance. It is clear from Fig. 8 that the firm profile has a significant interaction with the effect of committee size on applicants reviewed. Firms with explorative profiles tend to possess a relatively diverse work-force, and a larger committee further increases the diversity of that committee, making it easier to hire candidates. In firms with more focus on conformance, a larger committee may not be much more diverse than the smaller committee, causing the number of additional votes necessary to become a hurdle and marginally increasing the number of candidates that must be reviewed. The effect on the firm's performance, as estimated by code knowledge, is a complicated interaction based on both firm profile and hiring strategy (as shown in Fig. 6). Each candidate that must be reviewed is some cost exacted on the organization.

Fig. 8 The average number of candidates reviewed relative to the base average. Both committee size and firm profile interact significantly with the number of applicants reviewed

7 Discussion

This project is an initial attempt to extend March’s powerful model by incorporating a theory of selection into the hiring process based on Morgan et al.’s work on participation (2010). This model makes the assumption that committee members, when hiring candidates, often use themselves as a guide to appropriate behavior, a type of homophily preference. It further suggests that organizations often use committees of multiple individuals to determine the final candidate selection. The hiring strategies used in this model are meant to be suggestive of realistic hiring practices, but certainly do not show all of the variation present in hiring committees.

As with the original March model, this is a type of stylized model which necessitates evaluation of stylized organizations. These organizations have no inherent structure, and the organization learns only from new blood entering the ossifying organization. Both of these assumptions should be relaxed in future work that leverages this model. Future models may be able to explore both stylized organizations, similar to those shown here, and applied problems based on instancing the simulation from an existing organization.

But even as a stylized general model, this work has shown some interesting behaviors.

Just as Morgan et al. (2010) showed that the decision to participate in combat was significantly affected by proximity to comrades and enemies, this simulation showed that social incompatibility among members of the hiring committee could deadlock progress. Given an n-dimensional space of reasonable size and a relatively small selection committee, the committee can rapidly find it impossible to agree on any particular candidate, with each candidate receiving a single vote from the individual they most resemble. This is one reason we were forced to implement the pressure mechanism shown in Equation 5. Without incorporating selection pressure, committees reviewed tens of thousands of candidates per position in exploitative and mid-range firms.

The interaction between committee size and firm profile suggests that organizations should consider the size of their hiring groups carefully. The extra time required to manage a larger committee may not be useful in firms that stress socialization, but it may be beneficial in firms less intent on socialization.

There are other limitations of this work we hope to address in the future, particularly:

One: Individuals and organizations must perceive the environment. This process is error-prone, and the errors are often interesting and important. The software framework is designed to support perception (as a stochastic process for apprehending the environment), but further work must be done to answer some questions relating to perception. Should individuals and organizations be required to perceive, rather than simply “know”, themselves? Should the error rates for various kinds of perception be different? What should inform these error rates, and what distribution should the probability model use? Is it possible that error tends to make people believe others are more different than they are, and thus helps them make better hiring decisions?

Two: individuals learn not just from the organizational code but from each other, perpetuating both correct and incorrect knowledge over time. March (1991) abstracts this important process through his use of the organizational code construct, but in future models we hope to include individual socialization as well as organizational socialization.

Three: hiring committees are complicated. In large organizations, members of hiring committees represent various necessary roles critical to the organization. Each member is expected to weigh in on a specific portion of the applicant’s credentials and their fit to the organization. In future work, it would be interesting to model an existing organization and its process of hiring, to determine if various structures are more or less capable of neutralizing the challenges imposed by member bias.

Four: committee members are meta-cognitive. Members of hiring committees are aware of, and may attempt to control for, their own biases towards similarity. Further, they are aware that their own performance will be evaluated by outside observers. Future work could involve rewards and penalties for hiring decisions using a reinforcement learning system. This may be a more effective and principled method for incorporating the opportunity cost mechanism.

Acknowledging these various limitations, we reiterate the primary contribution of this paper: a docked extension of the March Mutual Learning Model that expands upon the important role that hiring plays in many organizations. We have examined both hiring strategies that resemble actual organizational hiring strategies and the interplay of homophily bias in the execution of these strategies. This work suggests that organizational models with turnover should incorporate more nuanced models of hiring.