
Exploring a Convergence Technique on Ideation Artifacts in Crowdsourcing

  • Published in: Information Systems Frontiers


Abstract

Convergence is a collaborative activity in which members of a group focus on the contributions from an ideation activity that they consider most promising or important. Convergence is critical in helping a group focus its efforts on issues that are worthy of further attention. In the current study, we further research in this area by exploring and characterizing the effects of a particular convergence intervention, the FastFocus technique, in the context of a crowdsourcing project. We conducted an exploratory case study of artifacts generated by a crowd of managers addressing a real problem identification and clarification task in a large financial services organization. Using an online crowdsourcing tool, a professional facilitator led participants during preset periods through a convergence activity focused on the previously generated brainstorming contributions. To better understand the effects of the convergence technique on the group’s ideas, we compared the raw problem statements to the final output of the convergence activities in terms of the number of unique ideas present and the ambiguity of the ideas. Using the FastFocus convergence technique reduced the number of concepts by 76%. Ambiguity was reduced from 45% in the raw set of problem statements to 3% in the converged set. These findings demonstrate that the outcomes of group convergence processes in real settings can be measured, enabling future research that seeks to evaluate and understand convergence in groups. We also identified aspects of brainstorming instructions that may make it possible to reduce the ambiguity of problem statements.



Acknowledgments

We are grateful for the hard work of Victoria Badura and Aaron Read on the collection and analysis of the data. We also appreciate the constructive suggestions from the review team.

Author information

Correspondence to Gert-Jan de Vreede.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1: Rules of Disaggregation

Definitions:

  • PROBLEM: A problem is a desired state or outcome that has not yet been attained (e.g. Our customers do not feel satisfied, although we want them to).

  • SYMPTOM: A symptom is some unacceptable condition that implies some desired state or outcome that has not yet been attained (e.g. Customers are returning products).

Rules:

1. Identify Verbs and Nouns

  • UNIQUE NVO: Each unique noun/verb/object combination that identifies a state or outcome that has not yet been attained will be disaggregated into simple problem statements.

  • ACCEPT SYMPTOMS: Identifying a symptom is an important aspect of framing a problem, so symptoms are acceptable as problem statements and are disaggregated using the same rules.

  • MEANINGLESS VERB: Objects will not be disaggregated when doing so renders the verb meaningless. For example, in “We feel torn between our duties to home and work,” we cannot disaggregate the problem to “we feel torn between our duties to home” and “we feel torn between our duties to work,” because doing so renders the concept “feeling torn between” meaningless.

  • GUIDELINE: Any time you must infer missing words to make a complete noun-verb-object problem statement, add them explicitly to the problem statement in parentheses.

  • ACCEPT REDUNDANCY: If people say the same thing in multiple ways in the same contribution, disaggregate both wordings into simple problem statements. Redundancy will be removed in a later activity. Tag it as redundancy for later review.

2. Break Phrases

  • BREAK OUT FIRST CAUSES from CAUSE-AND-EFFECT: When presented with a causal chain, disaggregate first causes into stand-alone problem statements. For example, in the contribution, “Under-staffing leads to overwork which leads to low morale,” “(we have) understaffing” would be disaggregated as a separate problem statement.

  • DISTRIBUTE CAUSES: Distribute first causes across their consequent problem statements to make stand-alone problem statements. Add the first cause to the problem statement in parentheses so that the ideas can be understood in subsequent analysis steps, e.g. “(rushed work) causes low satisfaction.” For example, in the contribution, “Under-staffing leads to overwork which leads to low morale,” the understaffing cause would be paired with the overwork effect, and overwork as a cause would be paired with low morale, as follows: “(Understaffing) leads to overwork,” and “(Overwork leads to) low morale.” Thus, this contribution would be broken out into two problem statements.

  • MULTIPLE CAUSES: All first causes must be combined when distributing across consequent problems, because we do not know whether either of the causes would invoke the effect on its own. For example, in the statement, “Understaffing and overwork cause low morale,” it is not possible to know whether understaffing or overwork each cause low morale, or whether both together cause low morale. Therefore, in this case, both understaffing and overwork would be broken out as first causes (see above), but low morale would be broken out as “(Understaffing and overwork) cause low morale.” Thus, this contribution would be disaggregated into three problem statements, but they would differ from the three statements illustrated in the previous rule.

  • NO THREE DEEP: Distributed causes will not span more than one cause and one effect - there will be no causal chains of three or more clauses in the disaggregated problem statements.

  • RETAIN DEPENDENT CLAUSES: Don’t break out dependent clauses as separate problem statements unless they are part of a causal chain. (Dependent clauses explain what, how, or when; a dependent clause may contain a problem statement when it begins with “to”.)
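The cause-handling rules above can be expressed procedurally. The following is a minimal sketch of the BREAK OUT FIRST CAUSES and DISTRIBUTE CAUSES rules, assuming a causal chain has already been parsed into an ordered list of clauses; the function name, input format, and uniform parenthesization are illustrative assumptions, not part of the published coding scheme.

```python
def distribute_causes(chain):
    """Disaggregate a causal chain into stand-alone problem statements.

    chain: ordered clauses, first cause to final effect,
           e.g. ["understaffing", "overwork", "low morale"].
    """
    statements = []
    if chain:
        # BREAK OUT FIRST CAUSES: the first cause becomes its own statement.
        statements.append(f"(we have) {chain[0]}")
    # DISTRIBUTE CAUSES: pair each cause with its immediate effect only
    # (NO THREE DEEP: never span more than one cause and one effect).
    for cause, effect in zip(chain, chain[1:]):
        statements.append(f"({cause} leads to) {effect}")
    return statements

print(distribute_causes(["understaffing", "overwork", "low morale"]))
# ['(we have) understaffing', '(understaffing leads to) overwork',
#  '(overwork leads to) low morale']
```

Note that MULTIPLE CAUSES is not handled here: when several first causes share one effect, they would have to be kept together in a single parenthesized phrase rather than paired one by one.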

3. Determine Ambiguity

  • BRACKET AMBIGUITY: If you find the language of the contribution allows multiple grammatically-sound interpretations that could lead to different disaggregation structures, depending on which interpretation you adopt, make your best interpretation, disaggregate accordingly, but put your responses in brackets. Explain the ambiguity by stating at least two possible grammatically sound interpretations allowed by the wording of the original contribution.

  • BRACKET SOLUTIONS DISGUISED AS PROBLEMS: If a problem statement is rhetorically stated as a solution, break it out as a problem statement and bracket it. It is not possible to reliably distinguish solutions from problems during disaggregation. They can be sorted out in a subsequent activity. Explain your brackets.

  • IGNORE THE POSITIVE: Do not include positive clauses or phrases in the set of disaggregated problem statements.

  • RHETORICAL QUESTIONS: If a statement contains a rhetorical question that can be re-framed as a problem statement, mark the contribution as ambiguous and re-frame it as a problem statement, and disaggregate it.

4. Resolve Conflicts and Ambiguities

  • NEGOTIATE. Discuss the possible interpretations. If one is clearly more plausible, ACCEPT that interpretation and disaggregate accordingly.

  • SYNTHESIZE a better interpretation together.

  • DISALLOW the contribution as too ambiguous if it is not possible to determine which interpretation is more plausible; DO NOT DISAGGREGATE the contribution.

  • ARBITRATE. If coders discover that they have an irreconcilable disagreement about the meaning of the contribution, submit it to a third coder for resolution.

Appendix 2. FreeBrainstorm thinkLet

Overview.

In a FreeBrainstorm, each participant works on a separate (electronic) page. Participants generate ideas in response to a single question or prompt. They contribute anonymously. After each contribution, a team member must switch to a different page, which may contain ideas already contributed by other participants.

Choose this thinkLet to:

  • Generate a wide variety of ideas; to push participants for breadth.

  • Overcome reluctance to contribute ideas that may be unpopular.

  • Mitigate information overload that can occur when teams of 5 or more people generate many ideas.

Results.

With FreeBrainstorm, participants generate a broad range of ideas that are distributed across a number of different pages. Contributions typically are a mix of good ideas, bad ideas, commentary on ideas, and a small number of off-topic comments. People who begin the activity with narrow, individual understandings of the problem finish with broader understandings of the problem.

Preparation.

You will need to develop a carefully-worded brainstorming question to complete this thinkLet successfully. Your team also needs the following capabilities:

  • A separate (electronic) page for each participant.

  • A way for each participant to view and add ideas to the page s/he works on.

  • A way for participants to switch pages.

  • A communication channel for a facilitator to explain the brainstorming question.

In ThinkTank, a FreeBrainstorm can be set up as follows:

  • In the Brainstorming module, add as many discussion sheets (i.e. categories) as there are participants.

  • Add an extra discussion sheet for each 10 participants in the group.

  • Configure the module such that participants can add ideas anonymously to different discussion sheets, but they cannot delete or edit sheets or ideas.
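The sheet-count rule above can be sketched as a small calculation: one discussion sheet per participant, plus one extra sheet for each 10 participants. Whether a partial group of 10 warrants an extra sheet is our interpretation (rounded up); the thinkLet description leaves this open, and the function name is illustrative.

```python
def freebrainstorm_sheets(participants):
    """Number of ThinkTank discussion sheets for a FreeBrainstorm session."""
    # Ceiling division: one extra sheet per (started) group of 10 participants.
    extra = -(-participants // 10)
    return participants + extra

print(freebrainstorm_sheets(12))  # 12 base sheets + 2 extra = 14
```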

Execution Rules.

The key execution rules for leading a FreeBrainstorm activity are as follows:

  • The facilitator explains the brainstorming question.

  • The facilitator explains how participants can select a discussion sheet and add an idea.

Participants then start at a random discussion sheet and can do one of three things:

  • Immediately contribute a new idea.

  • Read the ideas already on the sheet as inspiration for a new idea.

  • Comment on an existing idea or group of ideas already on the sheet.

  • After making a contribution, participants should switch to the next discussion sheet and again do one of the three things mentioned above.

  • This process continues until the allocated time is up or participants have run out of inspiration.

Appendix 3. FastFocus thinkLet

Overview.

During FastFocus, the participants each browse through a subset of all brainstorming ideas in several rounds. In the first round, each participant in turn proposes a key idea from their set to be included in the final set. The team discusses the meaning and the wording of a proposed item, if necessary. The facilitator adds a concise, clear expression of the idea to the public list, which represents the final set. After each participant has contributed their first idea to the final set, they switch to a different subset of all brainstorming ideas and a new round ensues. A team typically needs about three or four rounds to identify all ideas that they feel should be included in the final set.

Choose this thinkLet to:

  • Quickly extract a thorough, clean, non-redundant list of key issues or ideas at a useful level of abstraction from the results of an idea generation activity.

  • Ensure that group members agree on the meaning and phrasing of the items on the resulting list.

  • Extract all the useful ideas from the set, not just the most popular ideas or the single best idea.

Results.

With FastFocus, participants converge from a large set of brainstorming ideas to a clearly understood, non-redundant, thorough list of ideas from the initial set that team members deem worthy of further attention.

Preparation.

You need the following information to execute a FastFocus:

  • An initial set of ideas from an earlier generation activity like FreeBrainstorm.

  • Possibly a structure for the formulation of each public idea that will be extracted using the FastFocus. For example, if the group is working on action items, the required structure for the public ideas may be that each idea starts with a verb.

Your team also needs the following capabilities:

  • One or more electronic pages, each containing a subset of the initial ideas. Thus, all initial ideas are distributed across multiple pages, so that each group member can start with a different set of ideas.

  • A way to display or share a public list of the ideas that group members deem to be worthy of more attention, i.e. the final idea list.

In ThinkTank, a FastFocus can be set up as follows:

  • If the starting ideas are not the result of a FreeBrainstorm, set up the Brainstorming module with as many discussion sheets as there are participants. Evenly distribute the ideas over these sheets, ensuring that comments to specific ideas are assigned to the same sheet.

  • In the Brainstorming module, create one additional discussion sheet, called “Final List”.

  • Configure the module such that participants can only read ideas on each discussion sheet. They should not be able to add, edit, or delete sheets or ideas.

Execution Rules.

The key execution rules for leading a FastFocus activity are as follows:

  • Participants are assigned a discussion sheet where they see a subset of the ideas from the initial set.

  • Participants take turns proposing ideas from the initial set for inclusion on the public list.

  • Participants can only propose one idea during each turn.

  • All participants get a turn in round-robin fashion. When the last participant’s turn is over, participants are assigned another discussion sheet and the process starts with the first participant again.

  • A participant may pass if (s)he has no idea to propose for inclusion on the public list.

  • Participants may not debate the merits of the ideas proposed by others for inclusion on the public list; participants may only debate the meaning of ideas.

  • Only the facilitator may add ideas to the public list.

  • The facilitator must add ideas proposed by group members, subject to the following constraints:

  • Any given idea may appear on the public list only once.

  • Vague or wordy ideas must be re-expressed clearly and concisely by the facilitator, and the facilitator must check that participants understand and concur with the new expressions before they are added to the public list.

  • Ideas not relevant to the purpose of the activity must not be added to the public list (e.g. potential solutions may not be added to a list of problem statements).

  • Ideas that are too general or too specific for the task at hand must be reframed at a useful level of abstraction before they are added to the public list.
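The turn-taking rules above can be sketched as a simple simulation: round-robin turns, sheet rotation between rounds, one proposal per turn, passing when nothing remains, and a public list that never holds duplicates. The data structures and the pick-first proposal strategy are illustrative assumptions, not part of the thinkLet; in practice the facilitator also rewords and filters proposals, which this sketch omits.

```python
def fast_focus(sheets, rounds):
    """Simulate FastFocus turn-taking.

    sheets: list of idea lists, one sheet per participant;
            participants rotate to the next sheet each round.
    """
    public_list = []
    n = len(sheets)
    for r in range(rounds):
        for p in range(n):
            # Participant p reads the sheet assigned for this round.
            sheet = sheets[(p + r) % n]
            # Propose the first idea not yet on the public list, else pass.
            for idea in sheet:
                if idea not in public_list:  # each idea may appear only once
                    public_list.append(idea)
                    break
    return public_list

sheets = [["overwork", "low morale"], ["understaffing", "overwork"]]
print(fast_focus(sheets, rounds=2))
# ['overwork', 'understaffing', 'low morale']
```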


About this article


Cite this article

de Vreede, GJ., Briggs, R.O. & de Vreede, T. Exploring a Convergence Technique on Ideation Artifacts in Crowdsourcing. Inf Syst Front 24, 1041–1054 (2022). https://doi.org/10.1007/s10796-021-10120-0
