1 Introduction

Crowd work is based on crowdsourcing and describes a digital kind of employment in which crowd workers - individuals of the crowd - are rewarded monetarily [1]. “Crowd work offers remarkable opportunities for improving productivity, social mobility, and the global economy by engaging a geographically distributed workforce to complete complex tasks on demand and at scale” [2]. Therefore, crowd work platforms have evolved and are used as intermediaries between worker and employer. In addition, they are the essential connecting work environment and the tool to orchestrate collaboration between crowdsourcer and crowd worker, and generally among individuals of the crowd. Platforms make it possible to efficiently assign individual stand-alone tasks to dispersed workers. However, larger projects can also be managed through a platform, either by splitting more complex tasks and assigning the subtasks to workers or by assigning larger tasks to a crowd for collaborative solving. As collaboration between heterogeneous participants can lead to better results in solving complex tasks [3,4,5,6], crowd work platforms may benefit from explicitly fostering and facilitating collaboration processes. However, collaboration requires closely coordinated joint work of several workers on the same artefact [7], e.g. a design draft or a software component, which places additional requirements on the platform’s functionalities. For example, (a) functionalities for attracting skilled workers willing to participate collaboratively in the crowd work process include various possibilities for targeted incentives, which are not limited to a monetary reward. A distinction for incentives can be made between extrinsic and intrinsic motivation [8,9,10,11,12]. Furthermore, (b) functionalities for the management of the joint work process must be implemented.
Throughout the crowd work process, especially when not only individual jobs are assigned but many workers are to work together on a project, coordination and communication are essential and need to be supported by a set of suitable functionalities. In addition, (c) the crowd needs to be guided well in order to ensure high-quality, reliable work results. Crowd work platform designers and providers thus face a multitude of design challenges, and no common standard has yet evolved in this dynamic domain on how to design effective crowd work platforms, especially when it comes to more complex tasks that require collaboration of crowd workers. These challenges appear from a platform designer’s point of view.

To address (a), (b) and (c), design principles could guide platform designers towards suitable solutions for supportive functionalities for digital crowd collaboration. As a structured collaboration process has already been in the scope of research (see [13, 14]), we extend this stream of research by deriving design principles for implementing collaborative crowd work platforms. Previous reviews of the crowdsourcing literature have shed light on this rising domain, in particular with respect to the general crowdsourcing field, but without addressing the specifics of collaborative work in detail, and they only cover the literature until January 2012 [15]. Other efforts do focus on collaboration between crowd workers, but they do not consider literature from relevant related fields, e.g. open innovation and social computing [13], and they lack an operationalization of findings towards design principles. Therefore, we address this knowledge gap by building on and extending this stream of work with respect to design knowledge for collaboration support, drawing on a broad and up-to-date literature basis. Additionally, we derive helpful design principles that should be utilized when designing the functionalities for collaboration support on crowd work platforms. With respect to (a), (b) and (c), we guide our research by the following research questions: Q1: What kind of incentive mechanisms can be used to promote collaboration on crowd work platforms? Q2: How can the collaborative work process between different actors on the platform be managed via functionalities? Q3: What kind of guidance or scripts can crowd work platforms implement to control the activities of the crowd or guide them in a specific direction?

To answer these research questions, we conduct a systematic literature review following the guidelines of vom Brocke et al. [16]. The aim is to consolidate the literature as a basis for deriving suitable design principles to answer Q1, Q2, and Q3.

The structure of this paper is as follows: First, we present the methodology for the systematic literature review. Second, we consolidate the findings to present the status quo of research and categorize them into implications for platform functionalities regarding incentive structures, the management of the work process, and crowd control. Third, we derive meta-requirements, formulate corresponding design principles, and discuss and consolidate the current literature to answer Q1, Q2 and Q3. We close this paper with a conclusion.

2 Methodology and Scoping of the Literature

This paper presents the state of research on crowd work platforms with respect to the challenge of designing effective platform functionalities for complex, collaborative tasks. More specifically, we analyze incentive structures, the management of the work process, and crowd control. We follow the guidelines of vom Brocke et al. [16] for a comprehensive search process to ensure completeness and thoroughness. First, the scope of the review is defined. The analysis covers publications from different areas: crowdsourcing, crowd work, mass collaboration and open innovation. In our study, we refer to the following conceptualizations of the terms:

“In crowdsourcing, a Crowdsourcer (e.g., a company, an institution, a group, or an individual) proposes a task via an open call to an undefined amount of potential contributors (crowd workers)” [1, 17]. Crowd work is based on the concept of crowdsourcing, with the difference that the potential contributors/crowd workers are in gainful employment [1]. Mass collaboration is characterized by the involvement of a large group of people (as a mass), e.g. the crowd; the usage of digital tools, e.g. a crowd work platform; and the digital outcome they produce together [18]. Chesbrough and Bogers define “open innovation as a distributed innovation process based on purposively managed knowledge flows across organizational boundaries, using pecuniary and non-pecuniary mechanisms in line with the organization’s business model” [19]. By considering these domains, we are open to concepts for platform solutions from different fields potentially dealing with distributed collaboration via platforms to inform our analysis.

We used the search string: ((“crowd work*” OR “crowdso*” OR “mass collaborati*” OR “open innovation”) AND (“platform*”)). This search string includes the logical OR operator, the logical AND operator and the wildcard *. The logical operators provide the correct relation among the substrings, and the wildcard allows a term to match any string beginning with it, e.g. crowdso* matches crowdsourcing as well as crowdsource. With the logical AND operator, the first part of the term will only be a match in combination with platform*. The decision to restrict the search to papers containing the term “platform*” was made to exclude the vast number of crowd work publications outside the scope of design research for crowd platforms, e.g. those concerning crowd work business models. With this focus, we aimed to identify papers that have platform design or functionalities at the core of their contribution. Consequently, a wide range of literature is considered, while literature with a different focus than crowd work platforms is excluded.

We considered the six databases listed in Table 1 due to their relevance for high-quality peer-reviewed information systems research and searched in each database in title, keywords and abstract. Table 1 also shows the number of results found in each database in October 2018 when we conducted the search (S1).

Table 1. Considered databases and search results

In step 2 (S2), by reviewing title, keywords and abstract of these 3155 publications, we could reduce the number to 127 papers for in-depth analysis. Most of the excluded papers had been matched by the search string because the term “platform” was used in another context unrelated to crowd work. Reasons for exclusion covered paper foci other than supporting crowd worker collaboration, or platform functionalities unrelated to influencing crowd worker behavior and interaction. Excluded were articles whose main goal was to define phenomena in the crowdsourcing domain or to technically deploy platforms and features. We also excluded publications that targeted optimization of platform functionalities, such as task breakdowns or task integration, that happen without any direct crowd worker involvement. Task breakdown is a highly relevant topic in the crowdsourcing field, but it mostly focuses on how to break down tasks so that crowd workers can accomplish them, not on crowd collaboration. Considering that crowd workers are at the center of each of our research questions, we excluded those articles. Eleven additional articles were added while conducting a backward and a forward search (S3) in these 127 publications. After a thorough analysis of the 137 full texts, 27 publications were considered for the study at hand. In this step, papers were excluded because they did not include any information that could contribute to answering our research questions. In particular, publications are not reported within the scope of this paper if they do not discuss aspects related to the three categories outlined in the introduction (incentive structures, management of the work process, crowd control) or do not report insights that can be used to derive design implications for platform functionalities.

Only the selected 27 publications are included in the following analysis and are reflected in our findings. Table 2 summarizes the considered literature and the results of the review. We identified which of the three specified topics (management of the work process (WP), incentive system (IS), crowd control (CC)) were addressed by each publication. Some of the publications addressed two of the topics and are listed twice in Table 2.

Table 2. Examined articles categorized by management of work process (WP), incentive system (IS), crowd control (CC)

3 Consolidation of the Literature: Status Quo of Research on Platform Functionalities

In this section, we consolidate the literature in light of the three guiding research questions (Q1, Q2 and Q3) and discuss their impact on platform design in relation to incentive structures, management of the crowd work process and crowd control. Furthermore, in this section we derive meta-requirements (MR) and corresponding design principles (DP) by referring to the literature.

3.1 Platform Functionalities for Incentive Systems

More and more crowdsourcing and crowd work platforms are being created and used in different domains to get work done by the crowd. However, to motivate the crowd in the long term, incentive structures that meet the requirements of all actors involved must be created. This section shows which platform functionalities the literature suggests for this purpose.

In order to determine which incentive structures should be implemented, it is first necessary to understand the effect of different structures and how the crowd is motivated. After all, one’s motivation has an impact on the quality of the contributions [41] (MR1). Thus, performance is often described as a function of motivation and ability [9]: Performance = f(Motivation × Ability).
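The multiplicative relation cited above can be made concrete in a short sketch. The linear form of f and the 0..1 scales are illustrative assumptions, not part of the cited model [9]; the point is only that performance collapses when either factor is absent:

```python
# Illustrative sketch of Performance = f(Motivation x Ability) [9].
# Assumption: f is the identity and both factors lie on a 0..1 scale.

def performance(motivation: float, ability: float) -> float:
    """Multiplicative model: performance is zero if either factor is zero."""
    return motivation * ability

# A highly able but unmotivated worker scores no higher than a
# motivated novice - ability alone cannot compensate:
assert performance(0.9, 0.1) == performance(0.1, 0.9)
```

This is why incentive design matters even for skilled crowds: raising motivation scales results multiplicatively, not additively.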

In the case of motivation, a distinction can be made between extrinsic and intrinsic motivation [8,9,10,11,12]. Hobbies, pleasure, and interests are intrinsic motivators, while extrinsic motivation delivers some compensation for work [11]. This can be, e.g., remuneration [9] or fulfilling the desire to learn and improve one’s own abilities [12]. Extrinsic motivation can be further divided into financial (e.g. money or job opportunities), social (e.g. knowledge or experience) and organizational (e.g. career prospects or responsibilities) motivators [11]. These types of motivation can have an impact on the submitted contributions. According to Frey et al. [9], intrinsic motivation may significantly increase the number of substantial contributions. A contribution is classified as substantial when it is both new and relevant. In contrast, extrinsic motivation may increase the number of non-substantial contributions [9].

It is also important to consider that there are different platform users who can have different motivations (MR2). For example, Schultheiss et al. [10] identified four different types of crowdsourcing users: the female creatives, the male technicians, the academics and the alternative all-rounders [10]. The differences in motivation across these user groups have not been investigated in detail, but such differences have to be taken into account when implementing incentive systems. In addition, there may also be differences between population groups and regions [24] (MR3). For example, in a survey, Americans viewed the money earned as additional income, while Indians (depending on the region in which they live) viewed the rewards as a primary source of income with which they could buy basic supplies to survive [24].

For platform providers, it would obviously be ideal if intrinsic motivation and non-monetary extrinsic motivation alone were sufficient for the crowd, as there would then be no remuneration expenses. Prior work within the analysis indicates that crowd workers can be driven by both intrinsic and extrinsic motivation [12]. However, the motivation may change over time (MR4) if the platform is used for a long time. According to Soliman and Tuunainen [12], motivation factors such as monetary rewards or curiosity are of great importance when using a platform for the first time. Over time, these factors lose importance and are replaced by social motivation factors such as enjoyment, altruism, non-monetary rewards, and publicity [12]. After all, the main reason to participate in online platforms is to help the community [42].

Incentives and motivators are strongly linked [11]. Incentives ensure sufficient motivation and thus represent an important factor for cooperation in crowd work. An incentive does not always have to be monetary: Chittilappilly et al. [8] distinguish generally between monetary and non-monetary incentives. Non-monetary incentives mainly address the non-monetary types of motivation: fun and entertainment, personal development, competition, as well as moral, purposive and material incentives in the form of points or credits [8] (Fig. 1).

Fig. 1. Categorization of incentives (based on [11] and [8])

In particular, the last point is important for crowd work platforms. For completed tasks, crowd workers earn points that enhance their reputation and help with other job applications on the platform [8]. With a point system, a monetary reward can also be gained, for example by exchanging points for awards [20]. However, workers usually prefer monetary incentives [8, 24]. The reward is generally paid only if the work is accepted by the client, which can lead to a high rejection rate and anger among the workers. It is important to set rewards correctly: too high a reward leads to unnecessarily high costs, while too low a reward may result in crowd workers neglecting the task.

Chittilappilly et al. [8] distinguish between two types of monetary incentive systems, each of which is further subdivided: systems that do not consider the worker’s reputation and systems that do. Table 3 gives an overview of the categorizations with the advantages and disadvantages of each method. The relevance-based model bases the incentive on the relevance of the task. In Harris [23], the task was to review resumes for a company, and four reward variations were compared: (1) a fixed reward for each review; (2) increased remuneration if the review matched that of an expert (positive incentive); (3) reduced remuneration if the review differed from that of an expert (negative incentive); (4) a combination of the previous methods (payment + deduction). In this study, the positive incentive and the combined approach provided the best results [8] (MR5). The survival analysis method is used to determine the time t until a certain event occurs [22]. With the aid of this analysis, a recursive algorithm could be developed that returns the minimum wage necessary to complete the task in the desired time [8]; the desired time must therefore be set before each task. The reputation-based method [29] considered crowd workers who received payment before completing the task. The free-riding problem was prevented by assigning orders based on previous interaction with the crowdsourcing platform, complemented by a punishment system for low-quality work. To improve quality, upcoming tasks were assigned to the workers with a better reputation.
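The four reward variations compared in Harris [23] can be sketched as payout rules. The concrete amounts, the scheme names, and the binary expert-agreement check are illustrative assumptions; the study’s actual parameters are not reported here:

```python
# Hypothetical sketch of the four reward schemes from Harris [23].
# BASE/BONUS/PENALTY amounts are assumed for illustration only.

BASE, BONUS, PENALTY = 0.10, 0.05, 0.05  # dollars per review (assumed)

def payout(scheme: str, matches_expert: bool) -> float:
    if scheme == "fixed":      # (1) flat reward for every review
        return BASE
    if scheme == "positive":   # (2) bonus when the review matches an expert's
        return BASE + (BONUS if matches_expert else 0.0)
    if scheme == "negative":   # (3) deduction when the review differs
        return BASE - (0.0 if matches_expert else PENALTY)
    if scheme == "combined":   # (4) payment + deduction
        return BASE + (BONUS if matches_expert else -PENALTY)
    raise ValueError(f"unknown scheme: {scheme}")
```

Schemes (2) and (4), which performed best in the study, are the ones whose expected payout rises strictly with accuracy.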

Table 3. Pros and cons of monetary incentive models [8]

In the rating and reward dividing model [28], workers are heterogeneous. The system comprises a rating scheme, a reward dividing system and other important crowdsourcing elements. New workers can only participate until a limit is reached, and since their work may be of low quality, there is a “reputation protocol”. To ensure that the employer does not benefit from rejecting all submitted results (MR6), an administrator distributes the reward to all participating workers if all results are rejected.
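The fallback rule of the reward dividing system [28] can be captured in a few lines. The even split among workers is an assumption for illustration; the published model’s exact dividing scheme is not reproduced here:

```python
# Minimal sketch of the reject-all fallback (MR6) in the rating and
# reward dividing model [28]: if the employer rejects every submission,
# the reward is still distributed among all participating workers.
# Assumption: shares are split evenly in both branches.

def distribute_reward(total: float, accepted: list[str],
                      workers: list[str]) -> dict[str, float]:
    if accepted:                      # normal case: accepted workers share
        share = total / len(accepted)
        return {w: share for w in accepted}
    share = total / len(workers)      # all rejected: everyone is still paid
    return {w: share for w in workers}
```

Because rejecting everything costs the employer the full reward anyway, the strategic incentive to reject all results disappears.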

A challenge is to find the right incentive type and a suitable amount of remuneration (MR7). With an increase in remuneration, the number of submitted results may increase significantly [25, 27] (MR8), and the quality of the best result also increases [27]. However, the quality of the average results does not necessarily increase [25, 27]. It is also possible that workers are not paid per job, but in bulk for many tasks.

Ikeda and Bernstein [43] show that, under the conditions of their study, a bulk payment after ten tasks increased the task completion rate while the quality was not significantly different.

Once different incentives have been identified, they must be implemented in the platform. The literature therefore presents various algorithms for different purposes. Tian et al. [26], for example, present an algorithm that encourages the crowd to take on tasks in less popular areas (MR9) in mobile crowdsourcing. Another article deals with incentive mechanisms for crowd workers who perform binary tasks [44]. Incentive models for a single-requester single-bid model, a single-requester multiple-bid model, and a multiple-requester multiple-bid model are presented in Zhang et al. [45]. Xie et al. [28] propose an incentive system that is robust against human factors (MR10).

Three more articles deal with the topic of mobile crowdsensing: Dai et al. [21] provide an incentive framework for this domain in which the employer performs a gambling process to check negative feedback in case of a poor assessment (MR11). A certain number of “players” are recruited who receive a small reward and then give feedback on data quality. If it turns out that the data quality of the submitted work is insufficient, the employer is refunded. If not, the employer pays the wage, the platform fees, the gamification costs, and a penalty for the wrong feedback. This framework can help motivate participants to contribute high-quality work, lead employers to truthful feedback, and make the platform more profitable [21]. Zhang et al. [46] also provide three incentive mechanisms for mobile crowdsensing, one of which maximizes platform utilization and makes recruitment decisions based on a control group; the other two aim to fulfil the criterion of truthfulness. Wen et al. [47] present a quality-oriented incentive mechanism for mobile crowdsensing whose algorithm can maximize the social welfare of all participants and lead to more submissions of high-quality data with less required computing power.

One potential conclusion we draw from the analysis of incentive systems is that social motivation factors deserve special attention when designing collaborative crowd work, as crowd workers may need to interact more intensively and for a longer duration with other users on the platform. Monetary rewards, which are common in small individual-task crowd work and in the studies we found, may fail to work in complex, collaborative settings, especially if goal achievement can hardly be attributed to individual workers. This assumption should be investigated in future research. Furthermore, it would be interesting to examine whether collaborative crowd work platforms attract workers with different types of motivation than platforms focused on individual contributions.

3.2 Platform Functionalities for the Management of the Work Process

For the crowd work process to run without major complications, a crowd work platform must implement functions for the management of the work process. These should ensure good communication and interaction between the employer and the workers, but also promote collaboration between them. The claims of both sides must be considered to allow a fair exchange between the different actors. It is not trivial to identify which specific functions should be implemented for this purpose, and demands can vary from platform to platform. This section shows which platform functionalities the literature suggests for the management of the work process.

In software engineering, one major domain for crowd work, communication, coordination and collaboration in their many facets are key factors of successful teams. For example, requirements for the tasks, assessment criteria and the progress of the task are exchanged among actors (MR12) [33]. In terms of communication, the exchange of messages and information on the platform is essential so that the employer can communicate requirements and conditions, and the worker can report technical and organizational problems. With respect to collaboration, the platform should, for example, synchronize the work progress across users if several people work on the same task (MR13). In terms of coordination, the platform must support management at both the technical and the business level. Furthermore, intellectual property should be protected [33]. The comparison of different types of crowdsourcing platforms by Peng et al. [33] discloses several weaknesses of some of these systems, especially with respect to collaboration support, awareness and value transfer (see Table 4).

Table 4. Crowdsourcing support from various software development platforms [33]

In crowd work management, socio-technical dependencies must also be considered. According to Conway’s law, social structures, e.g. the team composition, have a great influence on the later technical structure of the product [34] (MR14).

Andersen and Mørch [30] investigated interaction patterns between end-users and developers in mass collaboration. Four different interaction patterns were identified (MR15). In the case of “gatekeeping”, an actor determines which information is passed on in order to protect other stakeholders from unnecessary information. The “bridge builder” instead distributes the information to other stakeholders, whereby they receive the information very early and to a sufficient extent. “General development” is when a local solution of an end user is taken over by a developer and the solution becomes part of the product. If users make adjustments to fit the software to a particular situation, supported by interaction with other end users, this is referred to as “user-user collaboration” [30].

The management of the work process also includes giving feedback (MR16). According to Dow et al. [31], the important decisions are when feedback should be given and who should give it. Feedback can be given directly during work execution or after the work is completed. Very early feedback means that there is very little time for the customer to provide it, which requires tools and algorithms for fast feedback generation. Feedback at a later stage gives the client more time, but the worker will no longer improve the work afterwards. Even simple binary feedback may improve the results. The more detailed and personal the feedback, the more the workers can learn, but this costs time and money. As long as a worker does not complain, there is often no feedback beyond “accepted” or “rejected”. For feedback generation, it seems natural to select the customer as the feedback provider; however, customers cannot always engage with the problems of the workers. Alternatively, workers can be paid to evaluate other workers (peer feedback). According to Yang et al. [35], identifying valuable workers (MR16) based on existing data and recruiting them for feedback would be one solution. To this end, Dow et al. [31] have designed a system that supports peer feedback: the feedback is sent to workers as soon as they begin a new task of the same kind, and this feedback increases the quality of subsequent results [31].

3.3 Platform Functionalities for Crowd Control

For crowd work platforms to be effective, mechanisms are necessary to control the crowd. This includes both the review of completed work and consciously steering the crowd. Efficient verification mechanisms such as quality control and methods for controlling crowd activities are required. For example, can users be guided in a direction that is advantageous for task completion? This section discusses which platform features the literature suggests for controlling the crowd.

Crowdsensing often involves the problem of an unequal distribution of workers. To counteract this problem, Tian et al. [26] describe a mechanism that encourages the crowd, through intelligent task assignment, to take over tasks in less popular areas. It is presented in pseudo code and can be implemented in platforms. All parties can benefit from this way of controlling the crowd: more tasks may be completed, the workers earn more money, and the platforms gain advertising revenues, mediation fees or at least publicity due to the higher number of completed tasks [26].
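The underlying idea of steering workers toward under-served areas can be sketched as a reward multiplier that rises as local worker supply falls. The inverse-scaling rule and the cap are illustrative assumptions, not the algorithm published by Tian et al. [26]:

```python
# Hedged sketch of area-based incentive steering in the spirit of
# Tian et al. [26]: fewer available workers in an area -> higher reward,
# so the crowd spreads into less popular areas.
# Assumptions: inverse scaling and a fixed multiplier cap.

def area_reward(base: float, workers_in_area: int, cap: float = 3.0) -> float:
    """Scale the base reward up for sparsely covered areas, up to `cap`."""
    multiplier = min(cap, 1.0 + 1.0 / max(1, workers_in_area))
    return base * multiplier

# An area with a single active worker pays double the base reward;
# a well-covered area pays close to the base.
```

The cap keeps the platform’s costs bounded even for areas with no active workers at all.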

Ankolekar et al. [20] describe the crowdsourcing platform MET (Market for Enterprise Tasks). By completing tasks, workers receive so-called MET coins, a virtual currency that can be exchanged for real dollars or other goods. The employer can place a maximum reward for a specific task on the platform; workers then bid how much the task is worth to them, and the employer chooses the winning bidder, who performs the task. The platform thus offers an incentive system linked to real money with very precise market regulation possibilities [20]. These are very helpful for platforms, but can appear non-transparent and arbitrary to the workers. MET uses concrete elements for controlling the market and thus the crowd: a maximum reward can be defined for each task, a maximum reward per worker can be set for a specific period, and since completed tasks are paid out in MET coins, the exchange rate of MET coins to dollars or other goods can be adjusted for further market control (MR18). This results in a number of ways to closely monitor and control the market.
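The market controls described for MET [20] amount to a small set of adjustable parameters. The class below is a sketch of that design, not MET’s actual implementation; all names and numbers are assumptions for illustration:

```python
# Illustrative sketch of MET-style market controls [20]: rewards are
# paid in a platform currency with an operator-adjustable dollar
# exchange rate, plus per-task and per-worker reward caps.
# All parameter names and values are assumed, not taken from MET.

class CoinMarket:
    def __init__(self, rate: float, max_task_reward: int,
                 max_worker_reward: int):
        self.rate = rate                            # dollars per coin
        self.max_task_reward = max_task_reward      # cap per task (coins)
        self.max_worker_reward = max_worker_reward  # cap per worker & period

    def task_reward(self, bid: int) -> int:
        """Clamp a winning bid to the per-task cap."""
        return min(bid, self.max_task_reward)

    def to_dollars(self, coins: int) -> float:
        """Convert coins at the current operator-set exchange rate."""
        return coins * self.rate

market = CoinMarket(rate=0.02, max_task_reward=500, max_worker_reward=2000)
```

Changing `rate` at runtime is the lever that lets the operator strengthen or weaken monetary incentives across the whole platform without touching individual task prices.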

In order to control the crowd, it is helpful to predict how good the quality of a solution will be. To make such a prediction, the knowledge, skills and other characteristics required to solve the task, such as motivation or personal attitude, must be known. Given this information, task performance can be predicted relatively accurately, as described in Hassan and Curry [36] (MR19). Workers can be clustered based on their probability of submitting good results, and a worker can be assigned a confidence interval; for example, worker X1 may have a confidence interval of [0.4, 0.8] for successful task completion. The ability to estimate performance offers several advantages: tasks can be assigned only to workers with specific traits, and if active feedback is given to the workers, the overall quality of the performance can be increased by such predictions; the feedback can even serve as a motivating factor [36].
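Prediction-based task routing of this kind can be sketched as a filter over per-worker confidence intervals. The intervals, the threshold rule, and the worker names are illustrative assumptions motivated by, but not taken from, Hassan and Curry [36]:

```python
# Sketch of routing tasks by predicted success probability: each worker
# carries a confidence interval for successful completion, and a task is
# offered only to workers whose lower bound clears a threshold.
# Intervals and the threshold rule are illustrative assumptions.

def eligible(workers: dict[str, tuple[float, float]],
             min_lower: float) -> list[str]:
    """Return workers whose interval lower bound meets the threshold."""
    return sorted(w for w, (lo, _hi) in workers.items() if lo >= min_lower)

crowd = {"X1": (0.4, 0.8), "X2": (0.7, 0.9), "X3": (0.2, 0.5)}
# With min_lower=0.4, X1 and X2 qualify; X3 does not.
```

Using the interval’s lower bound rather than its midpoint is a conservative choice: a worker qualifies only if even the pessimistic estimate meets the bar.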

Mok et al. [32] describe how workers delivering poor-quality work can be identified in the field of crowd testing. Most of them would like to get paid for minimal effort; by identifying these people, the reliability of crowd testing can be significantly increased. Various factors such as start times, breaks, or the number of clicks are used to identify work behaviours. However, since mouse movements and mouse clicks are analysed, this method cannot be transferred without problems to all types of crowdsourcing or crowd work. Therefore, other methods, e.g. identifying fraud by analysing the order of answers to specific questions [48] or analysing user behaviour to predict the quality of work [49] (MR20), are possible concepts platform designers could adopt. Furthermore, the framework of Gomez and Laidlaw [50] proposes an approach that collects user interaction data from which an estimated completion time can be derived. In addition, factors such as completion time, working time in the individual phases (such as “read” or “answer questions”) and the time of observation can be analysed to predict the quality of the work results [37]. All of these works aim at assessing quality and are thus important for controlling the crowd.

Another way of recognizing workers who deliver low-quality work is a qualification test. In addition, there are methods to recognize bad or careless workers, so-called spammers, based on the results submitted. Workers with low quality values or spammers can be excluded from the platform, which is already a common method for quality improvement [38].

Naderi et al. [39] investigate the reliability of responses in the field of crowdsourcing. In one study, a method was used that is not noticeable to the worker, while in another study both noticeable and non-noticeable methods were used. In the study with the obvious method, workers gave more consistent answers: when users know that their work is being reviewed, they respond more reliably [39]. Crowd work platforms can exploit this by making quality checks transparent to their users and using them both as selection mechanisms and as steering tools to enforce reliable behaviour (MR21).

The work of Abhinav et al. [40] presents an intelligent assistant that works alongside the crowd worker to support them with recommendations and guidance. Platforms could use artificial intelligence in the form of virtual intelligent assistants to shape crowd workers’ behaviour in a direction that is beneficial for the platform (MR22). This approach could be used to influence workers’ behaviour and would be one way to control the crowd.

3.4 Design Principles

By consolidating the current literature, we derived 22 meta-requirements. Based on the scoping of the literature, MR1-MR11 refer to the design of incentive systems, MR12-MR16 to the work process, and MR17-MR22 to crowd control. With reference to MR1-MR22, we formulated eight action-oriented design principles (Table 5) according to Chandra et al. [51]. Some of the MRs were considered more than once in formulating the design principles. Even though we assigned each DP to one of the scopes IS, WP or CC (Table 5), the DPs are not exclusively limited to this assignment; a DP can thus have effects in, and overlap with, more than one scope.

Table 5. Meta-requirements (MR) and the derived design principles (DP) for collaboration on crowd work platforms.

The design principles serve as a basis for platform designers and developers to support collaborative task accomplishment.

4 Discussion and Conclusion

Based on the analysis of the reviewed literature and the derivation of meta-requirements, we formulated design principles for designers and providers of crowd work platforms that may improve incentive mechanisms, management of the work process, and crowd control for collaborative crowd work.

Concerning incentive structures (as in Q1), it turned out to be important to understand how people are motivated and how incentives may trigger the motivation of crowd workers. Crowd work platforms can use this knowledge to motivate workers, depending on the specific field of activity and on crowd worker characteristics. In this paper, the various aspects of labour motivation are consolidated and design suggestions are derived from them. For example, if workers are mainly intrinsically motivated, this motivation can be further promoted by making the work more entertaining and pleasurable, e.g. through gamification, learning, and social factors. As a result, monetary remuneration may become less important for motivating workers, since individuals enjoy the task itself.

The reviewed studies also suggest that all presented incentive systems have advantages and disadvantages and must be cautiously adapted to different conditions. By using the incentive options presented in this paper (also in scope of DP1, DP2, DP3) and combining them with their domain knowledge, platform providers can identify the best individual solution for their platform.

Regarding the second topic under study, management of the work process (as in Q2), this work identifies interaction patterns and describes the process of generating feedback in a consolidated way based on different views in the literature. Crowd work platforms that depend on collaboration between individual crowd workers, or aim to extend collaboration functionality towards more complex tasks, can check whether they meet the requirements and, if necessary, adapt the functionalities (DP4 and DP5) presented here.

With respect to Q3, control mechanisms for the crowd workers and the market are explained and methods for predicting worker quality are described. In particular, we found transparent quality tests to be a promising means of improving quality, e.g. by considering human factors such as skills, motivation, and attitude to profile and predict crowd workers’ quality of work, or by considering factors such as start times, breaks, and number of clicks to identify poor quality. In the pursuit of crowd control, mechanisms building on user collaboration, such as user evaluation of contributions, are also strikingly prevalent. With this knowledge, platforms can anticipate worker quality more accurately and with limited resource commitment, and adapt their algorithms accordingly (DP6, DP7 and DP8).

In sum, we were able to answer Q1, Q2 and Q3. This paper consolidates the current state of research on platform functionalities in the field of collaborative crowd work for the three key areas described. The review excluded aspects of crowd work platforms beyond the three guiding research questions. For example, many articles in the literature deal with task matching, which was not considered in our collaboration-focused analysis. Nevertheless, task matching is, of course, relevant for crowd work platforms: efficient task matching can increase competitive advantages. How crowd work platforms can use task matching for collaborative tasks, and how well such algorithms are implemented in practice, has to be examined more closely in future work. In addition, there are other challenges and problems in crowd work that deserve to be addressed but are outside the scope of this work; for example, there is a research gap concerning working conditions [52].

The question of how crowd workers should best be supported to work together most effectively within a project that requires collaboration is not answered in detail by the current state of the literature. While we found insights in each of the three fields of interest, the approaches so far are still exploratory or have been applied only in very specific domains. Additionally, while we only used publications that explicitly address collaboration in crowdsourcing, most of the work does not specifically distinguish the unique demands that result from collaborative tasks. To address this issue, further research is needed. Moreover, we could not identify decision criteria in the current state of research for choosing suitable functionalities to promote collaboration on crowd work platforms. We propose the following avenues for future research: How does combining different functionalities on crowd work platforms affect the collaboration process? What kind of behaviour is fostered by adopting different functionalities on a platform? How should functionalities be implemented effectively? Do the functionalities differ in their impact across domains or target groups with respect to interaction beyond the individual?

Following Gregor and Hevner [53], this paper aims to contribute prescriptive knowledge towards a “theory of design and action” [54], with a set of derived meta-requirements and corresponding design principles that guide platform designers and developers in supporting collaboration on crowd work platforms.