
Computers in Human Behavior

Volume 50, September 2015, Pages 66-75

Interacting with bots online: Users’ reactions to actions of automated programs in Wikipedia

https://doi.org/10.1016/j.chb.2015.03.078

Highlights

  • Reactions of users to bots can be extremely varied.

  • Some patterns of bot behavior elicit similar types of reactions from users.

  • Bots are surprisingly well accepted by users.

  • Policing bots elicit more polarized (negative but also positive) reactions.

  • Bot–user interactions point to the emergence of a sociotechnical system.

Abstract

With the drastic rise of social media, large-scale collaborative online projects such as Wikipedia now deal with incredibly large amounts of data. This growth forces the community to invest tremendous effort to maintain the accuracy and structure of the database. To cope with such amounts of data, Wikipedia users have developed automated programs – bots – to help them with some of the maintenance tasks. However, it is unclear how human users react to the actions of these bots. Based on a corpus of 6528 interventions (2353 different discussions) of users on the talk pages of 50 bots active on English-language Wikipedia pages between January 4, 2012 and January 2, 2013, we analyzed the reactions of users depending on the characteristics of the bots’ actions. Bot activity was strongly associated with the functioning of the internal Wikipedia community. Bots whose activity was mostly related to the work of other users (e.g. high degree of constraint or visibility) elicited more responses. By combining the different characteristics of the bots, we were able to define two opposite “ideotypes” of bots with distinct behavior: “servant bots”, which mainly do repetitive and laborious work in place of human users; and “policing bots”, which proactively enforce Wikipedia’s guidelines and norms, and which elicited more polarized responses from users (either negative or positive rather than neutral). Our results demonstrate a surprisingly high level of acceptance of bots, modulated by differential reactions as a function of the actual behavior of the bots.

Introduction

The drastic increase in the size of social networks and virtual communities over the last decades has deeply affected the way humans interact online (Dunbar, 2012). Indeed, the global propensity of users for collaboration – emerging from the need to maintain efficacy – increases as the size of the virtual community increases (Yang et al., 2013). While optimizing communication strategies within the community can help maintain social density to an extent (Guitton, 2015), ultimately, the growth of a virtual community forces its members to reduce their individual capacity of action in favor of the emergence of managerial operative frameworks (Yang et al., 2013).

One of the most widely used solutions that online communities have found to this issue is the development of specific tools that provide assistance to human users – whether standard users or system administrators. These tools often take the form of automated programs able to operate directly in the online medium, referred to as “bots”. Internet users have long been exposed to bots, whether used for entertainment on IRC, supporting the community by doing administrative work, or gathering and providing particular sets of data to users. While conversational bots are easily identified as such and are mainly used for mundane purposes, bots with explicit duties play an important role in structuring both the information available to the community and the community itself. Indeed, their programmed scripts make them act on a systematic basis, whether the action of the bot involves material added by the bot’s owner or by other users, and, more importantly, whether or not the human users are aware of the action of the bot. Therefore, due to their function, and despite the fact that bots obviously do not have any form of consciousness, they strongly contribute to automating and enforcing rules. In a small community, the actions of bots would be easily identified, and the bot’s owner would be held accountable for the bot’s actions. Any malfunction of the bot would immediately trigger a human reaction, the bots then being perceived as mere tools.

The situation, however, becomes considerably more complex in larger communities. Indeed, with the considerable growth of social media and of online collaborative communities and projects, interactions between human users and bots are becoming more and more common. Beyond a mere increase in the magnitude of interactions, the actions of bots on online material increase exponentially, and in any case considerably faster than the actions of human users, due to the automated nature of bots’ actions. Therefore, modern bots are pushing these interactions to a new level. In this emerging reality, the question of how human users perceive bots, and perceive their interactions with them, becomes central. More specifically, while users tend to perceive humans as superior to bots in computer-mediated communication (Aharoni and Fridlund, 2007, Edwards et al., 2014, Lortie and Guitton, 2011, Mowbray, 2014), the question remains of what kind of reactions and perceptions could be expected from human users confronted with massive actions of bots, and with the need to cooperate with them more tightly.

Among the different emerging communities making heavy use of bots, Wikipedia represents a model of choice. Wikipedia is a community-based collaborative encyclopedia, in which users can contribute knowledge in a massively shared process (Fallis, 2008, Giles, 2005, Loveland and Reagle, 2013, Zlatic et al., 2006). With the constant growth of its content, Wikipedia deals with incredibly large amounts of data, which forces the Wikipedia community to invest tremendous effort to maintain the accuracy and structure of the database (Niederer & van Dijck, 2010). To cope with such amounts of data, Wikipedia users have developed bots to help them with some of the maintenance tasks (Geiger, 2011, Halfaker and Riedl, 2012, Müller-Birn et al., 2013, Niederer and van Dijck, 2010). With approximately 400 bots in 2010 (Niederer & van Dijck, 2010), 700 bots by September 2011, and 872 bots referenced on English Wikipedia pages on January 2, 2013, the number of bots on Wikipedia has been growing constantly. While bots were responsible for only 3–5% of the edits of the English version of Wikipedia in 2005–06, they accounted for 16.33% of edits in 2009 (Geiger, 2009), and for above 20% of the total number of edits on English Wikipedia pages in 2012, as compiled from Wikipedia monthly statistics. Furthermore, two-thirds of new Wikipedia users receive their first message from a bot or a semi-automated algorithm (Geiger, 2014); therefore, bots impact the retention rates of new users one way or another.

In the context of the study of human–bot interactions in virtual spaces, Wikipedia bots represent an excellent model for several reasons. First, the combination of the size of the Wikipedia community, the sheer number of Wikipedia bots, the size of the corpus of texts that can be edited either by humans or by bots, and the number of actual edits performed daily on Wikipedia results in a considerable number of potential interactions. Second, and perhaps more important, is the extremely prominent social factor underlying the process of knowledge accumulation, and therefore the implications of human–bot interactions. Indeed, the social dimension of Wikipedia remains extremely prominent, since the addition of knowledge heavily relies on community-based decision processes and “consensus” (Jahnke, 2010, Viégas et al., 2007). In other words, the decision processes in Wikipedia are in fact based on social factors (Black et al., 2011, Chang and Chuang, 2011). However, while clearly perceived as not equivalent in terms of social status, Wikipedia bots nonetheless have considerably more power over the content of Wikipedia (power of alteration and control over the content) than a standard registered, non-administrator user (Niederer & van Dijck, 2010). Thus, the actions of bots could bypass the conventional decision process. In this context, the massively increasing actions of bots on Wikipedia content cannot go unnoticed by community members and Wikipedia users, making Wikipedia a model of choice to study the interactions between bots and humans in an ecological, large-scale setting, as well as the perception of bots by users.

Based on the analysis of comments on the talk pages of bots active on English Wikipedia, this study aims to characterize the interactions between users and bots. By aggregating different individual characteristics of bots, we were able to define opposite, yet complementary, global profiles of bot behavior, and to demonstrate differential reactions of humans as a function of these behavioral bot phenotypes. Our results lead to a better understanding of the way bots are perceived by the community, and demonstrate the importance of reminders of human control over bots in their acceptance by others.

Section snippets

General protocol

The perception of bots by human users was evaluated for bots active on English Wikipedia, via the analysis of the posts left by users on the bots’ talk pages. Given that each individual language version of Wikipedia is supported by a slightly different community, the exact set of rules and conduct may vary slightly across languages. Hence, we focused on the mainstream and largest Wikipedia community, the English-language one. Bots were defined as automated software agents that perform
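The article does not reproduce its collection scripts; as a rough illustration of the kind of data collection this protocol implies, the following Python sketch retrieves the current wikitext of a bot’s user talk page through the public MediaWiki API and splits it into discussion threads. The bot name used here, the requests library, and the heading-based splitting heuristic are our assumptions for illustration, not the authors’ method.

```python
# Minimal sketch (not the authors' pipeline): fetch the wikitext of a
# bot's user talk page from the public MediaWiki API and split it into
# top-level discussion threads. Assumes the third-party `requests`
# library; the bot name below is only an illustrative example.
import re
import requests

API = "https://en.wikipedia.org/w/api.php"

def fetch_talk_page(bot_name: str) -> str:
    """Return the current wikitext of User talk:<bot_name>."""
    params = {
        "action": "query",
        "titles": f"User talk:{bot_name}",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
        "formatversion": "2",
    }
    data = requests.get(API, params=params, timeout=30).json()
    page = data["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]

def split_discussions(wikitext: str) -> list[str]:
    """Crude heuristic: level-2 headings (== ... ==) start a new thread."""
    return re.split(r"^==[^=].*?==\s*$", wikitext, flags=re.MULTILINE)

if __name__ == "__main__":
    text = fetch_talk_page("ClueBot NG")  # example bot only
    threads = split_discussions(text)
    print(f"{len(threads)} discussion threads on the current talk page")
```

In practice, a full reconstruction of a year of discussions would also need the page’s revision history (and its archives), since talk pages are regularly archived; the sketch above only captures the current state of the page.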

Characteristics of the bots population

A total of 872 bots were referenced on English Wikipedia pages on January 2, 2013. Between January 4, 2012 and January 2, 2013, 170 bots were active, for a total of 8,985,353 edits (52,855 ± 10,437 edits per bot; median: 10,037; maximum: 975,675; minimum: 20). The distribution of their contributions follows a power law (R2 = 0.759, p < .001, Fig. 1). A similar distribution was evidenced in our final sample of 50 bots for the total number of edits (R2 = 0.837, p < .001, with an average of 118,360 ± 
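The snippet reports the fit statistics (R2, p) but truncates before the fitting details; one common way to obtain such statistics for a power law is an ordinary least-squares regression on log-log axes. The sketch below illustrates this with synthetic per-bot edit counts (the data values and the choice of scipy.stats.linregress are our assumptions, not the authors’ procedure).

```python
# Minimal sketch of a power-law check on per-bot edit counts, assuming
# (as is common for such R^2 values) an OLS fit on log-log axes. The
# edit counts below are synthetic; the paper's data are not reproduced.
import numpy as np
from scipy.stats import linregress

# Synthetic example: edit counts of bots sorted in decreasing order.
edits = np.array([975675.0, 410000.0, 150000.0, 52000.0, 10037.0, 1200.0, 20.0])
rank = np.arange(1, len(edits) + 1)

# A power law edits ~ rank**(-alpha) is linear in log-log space:
#   log(edits) = -alpha * log(rank) + c
fit = linregress(np.log(rank), np.log(edits))
print(f"alpha = {-fit.slope:.2f}, R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3g}")
```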

Wikipedia bots as a model of perception of human/bot interactions

While Wikipedia bots have already been the target of research interest (Geiger, 2011, Halfaker and Riedl, 2012, Müller-Birn et al., 2013, Niederer and van Dijck, 2010), the focus so far has almost always been on characterizing the putative role of the bots at structural levels – what some authors refer to as sociotechnical norms or systems (Halfaker and Riedl, 2012, Niederer and van Dijck, 2010). The objective of the present study was quite different: we focused here on investigating

Acknowledgments

MJG holds a Career Grant from the “Fonds de Recherche du Québec – Santé” (FRQS). This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC, Grant Number 371644).

References (28)

  • Geiger, R. S. (2009). The social roles of bots and assisted editing programs. In Proceedings of the 5th international...
  • Geiger, R. S. (2014). Bots, bespoke code, and the materiality of software platforms. Information, Communication & Society.
  • Geiger, R. S. The lives of bots.
  • Geiger, R. S., et al. (2010). The work of sustaining order in Wikipedia: The banning of a vandal.