1 Introduction

The Citizen Science Association defines citizen science as the “involvement of the public in scientific research – whether community-driven research or global investigations” [1]. A powerful value proposition for citizen science is that programs are able to achieve multiple outcomes, including both science and public engagement goals [2]. From a science perspective, citizen science can help generate datasets over large geographic and temporal scales, speed up field detection, and enable the classification of large datasets [2]. These benefits are exemplified by significant scientific results from citizen science projects, including documenting range shifts [3], assessing vulnerable species [4], informing water resource and species management [5], and supporting disaster and conflict resiliency [6]. From an education and engagement perspective, citizen science has been shown to advance both individual and societal outcomes, including environmental stewardship [7], community capacity [8], environmental justice [9], and co-production of knowledge and practice [8].

The roles of volunteers in citizen science projects are diverse, from contributing field observations, to sorting or classifying images from home computers, to identifying and addressing a local issue of concern [1, 10, 11, 12]. In addition, citizen science projects can range in scale from a local conservation challenge (e.g., pollution in a local water body) to global in scope (e.g., tracking monarch butterflies across North America) [13, 14].

In the last two decades, citizen science approaches to gathering biodiversity and natural resource information have proliferated [15, 16]. Researchers have identified three factors driving this growth: (1) the evolution and accessibility of technical tools to improve communication, dissemination of information, and data collection; (2) appreciation by professional scientists and others that the public represents a valuable source of labor, skills, computational power, and funding; and (3) the increased value realized by improving the public’s understanding of research and monitoring through engagement in the scientific process [17].

In this paper, we explore the role of technology, and more specifically the contribution of HCI expertise, in enhancing the design of two citizen science programs.

2 Role of Technology in Citizen Science

Low-cost, user-friendly technology has enabled global participation in citizen science programs, which continue to expand in an effort to engage the public in biodiversity research [18]. In 2016, 90% of North Americans had access to the internet, and 67% of Canadians and 72% of Americans owned a smartphone [19]. In addition, it has been estimated that by 2020, 80% of people globally will own smartphones [20]. Smartphone applications (mobile apps) developed for citizen science programs enable real-time reporting of observations, with functionality such as recording sound, taking photos and video, and pinpoint locational accuracy using global positioning systems [21, 22].
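
To make this concrete, below is a minimal sketch of the kind of observation record such an app might capture; the field names and example values are our own illustration, not the data model of any particular program.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Observation:
    """One citizen-reported sighting, as a mobile app might capture it."""
    species: str                      # e.g., "Wood Frog"
    latitude: float                   # decimal degrees from the device GPS
    longitude: float
    recorded_at: datetime             # timestamp of the observation
    photo_path: Optional[str] = None  # local path to an attached photo
    audio_path: Optional[str] = None  # local path to a recorded call
    notes: str = ""                   # free-text notes from the participant

# Example: a GPS-tagged record with an attached sound file.
obs = Observation(
    species="Wood Frog",
    latitude=51.0447,
    longitude=-114.0719,
    recorded_at=datetime.now(timezone.utc),
    audio_path="recordings/wood_frog_call.m4a",
)
print(obs.species, obs.latitude, obs.longitude)
```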

Supportive applications built for smartphones, computers, and other emerging technology have the ability to engage large numbers of volunteers, improve data collection, control data quality, and increase the usability of the data [23], all important variables of citizen science programs. Engaging a large number of volunteers is exemplified by the success of projects on the Zooniverse online citizen science portal: the Galaxy Zoo project enlisted volunteers who collectively made 50 million galaxy classifications within the first year of operation, and Snapshot Serengeti, another Zooniverse project, saw 28,000 volunteers classify over 1.51 million remote camera photos [11, 12]. Technology has also improved the data collection experience for users through streamlined interfaces. Foldit, a collaborative citizen science gaming application, is credited with identifying the structure of a protein important to AIDS research [24]. This virtual citizen science approach greatly increased the speed at which this scientific discovery was made, contributing significantly to the scientific process. Data quality can also be improved using smartphone applications. For example, some projects run by OPAL (Open Air Laboratories), a program engaging volunteers in environmental research in the United Kingdom, require participants to upload photos to help verify observations [25]. Lastly, data sharing and the usability of data can be greatly improved: Project BudBurst, for example, developed a smartphone app through which participants monitor phenology (observing seasonal changes such as plants leafing, flowering, and fruiting) and can upload photos with notes and information on the observations, creating an online community [26]. All of these examples rely on technology and its usability.

The use of HCI in citizen science has been limited due to inadequate interaction between citizen science practitioners and HCI specialists [27]. Preece identified HCI as having a key role to play in supporting citizen science, but examples of collaboration remain limited [27]. Frameworks to support the design and implementation of citizen science do not include specific reference to usability testing for data collection, data usability, and communication applications [16, 28].

The usability of a citizen science application depends on the development process: developers and information design specialists should be engaged early, and feedback should be obtained from end-users. A recent review of Google Play found 6,031 nature-based applications, of which only 33 supported citizen science. The review concluded that no application captured the full potential of the capabilities available, and most failed to capture the public imagination [29]. This example highlights the need for HCI specialists to have an increased role in citizen science program development. As Newman et al. highlight:

“…volunteers require simple features that are within their cognitive access. They need to experience initial easy success. Once successful, they may explore complex questions in more depth and have patience for more complex user interface designs and features…” [30].

Research exists that explores methodologies and tools to enhance participation and emphasizes the importance of the development phase for smartphone apps [21]. Newman et al. provide a comprehensive discussion of their User-Centric Design (UCD) approach to the development of the geospatially enabled www.citsci.org website [30]. They detail the ‘iterative investigation, design, requirements specification, development, implementation, testing, and maintenance’ development cycle, with particular emphasis on a task-based usability evaluation with 16 participants; a further 10 participants were engaged in post-production feedback. The authors reported that participants had difficulty with complex tasks, especially if they were unfamiliar with map features and navigation. Participants also pointed out the need for integrated tasks between citizen scientists and volunteer coordinators [30]. Ease of use and simplicity were also identified as important for adoption. This example highlights the importance of integrating HCI with citizen science to optimize the utility of smartphones and websites in advancing citizen science initiatives.

2.1 Usability Evaluation and Reporting

There are multiple opportunities to solicit end-user feedback during the development life-cycle. Tullis and Albert [31] identify formative usability as evaluation activities that aim to identify design issues and improvements before release, making frequent, iterative changes based on feedback from smaller numbers of participants. They identify summative usability as evaluation activities that allow developers to assess performance against identified goals or expectations, or to compare designs [31]. Figure 1 shows how these evaluation approaches can overlap during a development cycle. A selection of usability test methods (in green text) has been included in the diagram to highlight how different usability tests can be integrated into product development. For example, rapid paper prototyping might be carried out with a few participants to guide interface design, while performance testing is done with many participants later in the development cycle to measure effectiveness, efficiency, and satisfaction. Development teams also need to be strategic with their investment in usability testing, as projects usually have limited resources and tight schedules. Guidance on the types of evaluation approaches can be found in many user experience textbooks and the ISO/IEC 25061 standard [32].

Fig. 1. Overlap of formative and summative evaluation during a product development life-cycle. (Color figure online)
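
As an illustration of the summative measures named above, the sketch below computes effectiveness, efficiency, and satisfaction from a per-task log; the log entries and SUS values are invented for illustration, not data from either case study.

```python
from statistics import mean

# Hypothetical per-task log: (participant, task, completed?, seconds on task).
task_log = [
    ("P1", "add observation", True, 95),
    ("P1", "upload photo",    True, 60),
    ("P2", "add observation", False, 180),
    ("P2", "upload photo",    True, 75),
]
sus_scores = [72.5, 80.0]  # per-participant satisfaction scores (see Sect. 4.2)

# Effectiveness: proportion of task attempts completed successfully.
effectiveness = mean(1.0 if ok else 0.0 for _, _, ok, _ in task_log)
# Efficiency: mean time on task, counting successful attempts only.
efficiency = mean(t for _, _, ok, t in task_log if ok)

print(f"Effectiveness (completion rate): {effectiveness:.0%}")
print(f"Efficiency (mean time on successful tasks): {efficiency:.0f} s")
print(f"Satisfaction (mean SUS): {mean(sus_scores):.1f}")
```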

An important consideration for any team should be the reporting of usability evaluation activities; excellent guidance can be found in the ISO/IEC 25066 standard [33]. The reporting should be appropriate for the type of usability test activities and participant engagement, and should be communicated to stakeholders accordingly. Little has been published in the citizen science-HCI literature on how usability evaluation and reporting affect the overall adoption of these applications by contributors and scientists. This paper highlights two case studies that integrated usability testing into unique citizen science programs focused on biodiversity conservation. In both cases, the results of the usability testing led to significantly improved applications, ultimately improving the quality of data collected, the actual use of the data, and the sharing of information.

3 Call of the Wetland Case Study (Formative Evaluation)

Call of the Wetland is an urban citizen science program, to be launched in April 2017, that enables the public to survey wetlands and report their observations of amphibians, tadpoles, or eggs, record amphibian calls, and submit photos of amphibian species. The program was developed with two goals: (1) to document where different amphibian species occur in the City of Calgary, Alberta, Canada, to support biodiversity planning, and (2) to engage the public in wetland conservation by visiting and studying amphibians in urban wetlands. Call of the Wetland was developed by a team of specialists with diverse backgrounds, including biology, communication, information technology, and HCI.

Program design elements were informed by a user needs assessment workshop with program collaborators, where program goals, objectives, and measures of success were outlined. The user needs assessment laid the foundation for program development and outlined the design elements and functionality of the collection tools, program website, and mobile application. The primary method of data collection is a Call of the Wetland smartphone application. HCI specialists contributed to the design of the usability test protocol, the recruitment process, and the presentation of results in a report format.

Call of the Wetland participants contribute by reporting their observations through an open-source smartphone application developed specifically for the program (Fig. 2). Participants can fill in details about their observation through a wildlife observation form, as well as upload photos or recorded sound files of calling amphibians.

Fig. 2. Call of the Wetland smartphone application, pre-testing.

3.1 Usability Method

A high-fidelity prototype was developed as a proof of concept and to fully scope out data collection and workflow. It was important to get early feedback on the prototype, not only to identify design improvements but also to gain first-hand experience with potential end-users. Two test protocols were developed: (a) a moderated think-aloud prototype exploration, and (b) an unmoderated field evaluation. Program partners, interested participants, and volunteers were recruited for these evaluation activities.

The moderated concurrent think-aloud usability test included (1) participant consent and study purpose (script), (2) participant demographics and technology experience, (3) moderated tasks and difficulty assessment (identify an amphibian, record a sound, submit a photo, ease-of-use scenarios), and (4) participant experience (likelihood of frequent use, assistance/learnability). Test participants were recruited from a diverse pool including researchers, environmental specialists, and public volunteers.

The goal of these usability evaluations was to surface design issues and rapidly iterate improvements. Participant feedback and experience were also valuable, as the initial test activities were carried out with a broad user base. A number of open-ended questions allowed participants to express their experiences in their own words.
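
As a minimal sketch of how per-task difficulty ratings from such sessions can be tallied (the rating scale, task names, and values below are illustrative assumptions, not the study's actual instrument or data):

```python
# Hypothetical post-task difficulty ratings on a 1 (very easy) to
# 5 (very difficult) scale, keyed by task.
ratings = {
    "identify amphibian": [1, 2, 1, 3, 2],
    "record a sound":     [3, 4, 2, 4, 3],
    "submit a photo":     [2, 3, 4, 3, 2],
}

for task, scores in ratings.items():
    difficult = sum(1 for s in scores if s >= 4)  # rated 4 or 5
    share = difficult / len(scores)
    print(f"{task}: {difficult}/{len(scores)} participants "
          f"({share:.0%}) reported difficulty")
```

Tallies like these make it easy to see which tasks warrant design attention first.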

3.2 Results

Moderated Evaluation

Fourteen participants, including program partners and volunteers likely to take part in the program, were included in the usability testing. Information was collected on participant occupation, age, comfort with technology, and amphibian knowledge. The majority of participants (78.6%) indicated that they were very comfortable using a tablet or smartphone prior to using the app. In addition, 57.1% of participants indicated that they were not familiar with amphibians and their habitats prior to the testing.

Participants were asked to rate the ease of use of a number of functions: using the amphibian identification guide, submitting an observation, uploading a photo, and uploading a sound file. For example, half the participants reported having trouble navigating the steps between uploading the audio or image and submitting it (Fig. 3), indicating adjustments were needed.

Fig. 3. Participant ease of use reported by task.

The resulting data allowed the researchers to identify a number of key issues, such as needed improvements to the amphibian identification page, ambiguity in the application guidelines, difficulty with the sound and photo upload process, verification of file uploads, and reporting of no sightings. The qualitative responses also provided excellent feedback on application ambiguity, learnability, and overall design.

Unmoderated Field Evaluation

Four participants completed the unmoderated usability test protocol, which allowed respondents to provide input on using the application for both opportunistic observations and systematic survey assignments. The self-reported comments identified issues similar to those from the moderated sessions, but also included possible issues and enhancements such as adjustments to the audio recording instructions, a clearer way for a participant to correct submission errors, and the ability to record the number of the wetland they had agreed to survey.

A consolidated summary report, including recommendations on how to address the identified issues, was prepared at the end of the testing phase. The report was shared with project partners, and changes to the smartphone application were then prioritized.

3.3 Impact of Report

Following the usability testing, the smartphone application underwent major adjustments based on the recommendations, including a significant overhaul of the amphibian identification guide, the photo and sound upload process, the registration process, and the ability to report no sightings. The changes significantly improved the data collection experience for users. The smartphone application will be released this spring, and further evaluations are planned to assess application performance.

4 Wild Watch Case Study (Summative Evaluation)

Wild Watch was created in partnership with Cenovus Energy and Shell Canada to increase employee awareness of wildlife stewardship and to generate a real-time spatial digital dataset of wildlife observations and human-wildlife interactions to inform industrial site wildlife management plans. Wild Watch participants (employees and contractors working at northern industrial sites) use a smartphone app and/or online mapping tool to enter their wildlife observations.

Historically, both industrial partners supported wildlife reporting programs in which participants reported observations via hard-copy forms. The programs were underutilized, and the data were ultimately deemed unusable because locational information was poorly reported. The growth and success of citizen science programs, and the role of technology within them, provided an opportunity to improve multiple aspects of the program, including data collection, data management, and usability of the data.

Key features of the Wild Watch program include streamlined data collection; improved locational accuracy of observations; reporting on human-wildlife conflicts and other activities of interest (wildlife on infrastructure, on roads, in garbage, or wildlife-vehicle collisions); alerting environmental staff when a human-wildlife conflict or an endangered species is reported; providing access to real-time data that can be searched by species, activity, or date; and automated upload into the government wildlife reporting form. These features all depend on humans interacting with the Wild Watch program via the project website (Fig. 4), online mapping tool, and smartphone application.

Fig. 4. Wild Watch web interface and smartphone application.
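
The alerting feature described above can be thought of as a simple rule over incoming reports. The sketch below is our own illustration of that shape; the watchlist contents, activity labels, and notification stub are hypothetical, not the program's actual implementation.

```python
# Hypothetical alert rule: notify environmental staff when a report
# involves a human-wildlife conflict or a species on a watchlist.
ENDANGERED_WATCHLIST = {"woodland caribou", "whooping crane"}  # illustrative
CONFLICT_ACTIVITIES = {"on infrastructure", "in garbage",
                       "wildlife vehicle collision"}

def needs_alert(species: str, activity: str) -> bool:
    """Return True if a report should trigger a staff alert."""
    return species.lower() in ENDANGERED_WATCHLIST or activity in CONFLICT_ACTIVITIES

def notify_staff(species: str, activity: str) -> None:
    # Stand-in for the program's real notification mechanism (e.g., email).
    print(f"ALERT: {species} reported ({activity})")

def process_report(species: str, activity: str) -> None:
    if needs_alert(species, activity):
        notify_staff(species, activity)

process_report("Woodland Caribou", "on road")   # triggers (watchlist species)
process_report("Black Bear", "in garbage")      # triggers (conflict activity)
process_report("Moose", "general observation")  # no alert
```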

HCI specialists played an important role in the development phase, undertaking usability testing of, and making recommendations on, the smartphone application and online mapping tool (Fig. 4). The program has been running since 2013, and annual reviews identify adaptations to improve the user experience. The program continues to evolve as staff and participants interact with the site and find ways to improve the experience.

4.1 Usability Methods

A two-stage test protocol was developed: (1) a moderated think-aloud task performance evaluation of the website, with self-rated experience captured using a System Usability Scale (SUS) questionnaire, and (2) a moderated think-aloud task performance evaluation of the application, with self-rated experience captured using a SUS questionnaire, followed by a session debrief with participants.

For both the website and the app, participants were asked to undertake the following tasks: (1) register for an account (website only), (2) add an observation (information provided), (3) upload a photo (photo provided), and (4) find information about moose. Test participants were recruited from Cenovus employees likely to use the application.

4.2 Results

Figure 5 summarizes success on tasks 1–4 (listed above) for six participants using both the website (W) and the mobile application (A). While all participants completed the four tasks, minor problems were noted.

Fig. 5. Participant task success (minor indicates task completion with some minor issue).

Only task success was recorded. Some learning effect may account for the higher success rates for the application over the website. However, with three of the participants having minor problems with task 2 (adding an observation via the mapping tool), the results pointed the development team to particular areas for improvement.

Figure 6 offers insight into participants' satisfaction with their experience. The advantage of using a standard instrument such as the SUS questionnaire is that it allows comparison with other reported SUS scores: a SUS score in the 50–70 range is considered marginal, while a score above 70 indicates an acceptable level [31, 34]. The high average SUS scores are an excellent indicator of the perceived ease of use of the website and application.

Fig. 6. Six participants' SUS scores for the Wild Watch web and smartphone applications.
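
For reference, a SUS score such as those in Fig. 6 is derived from the ten questionnaire items using the standard scoring rule (Brooke, 1996); a minimal sketch, with an invented response pattern:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring: ten items, each rated 1-5.

    Odd-numbered items (1st, 3rd, ...) contribute (rating - 1);
    even-numbered items contribute (5 - rating). The sum is scaled
    by 2.5 to yield a score from 0 to 100.
    """
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based
                for i, r in enumerate(responses))
    return total * 2.5

# A strongly positive (hypothetical) response pattern:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 2, 4, 1]))  # 95.0
```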

This combination of task success and SUS scores supports the development effort, allows for future comparison, and is a good indicator of the likely acceptance of the application. A comprehensive usability report was provided, with an executive summary identifying positive findings, areas for improvement, and associated recommendations, separated by website and application. The report also included a full description of the test method, moderator scripts, tasks, and questionnaires used. The collected data were collated and presented in a way that other researchers could easily follow, demonstrating that the test protocol was executed consistently and could readily be duplicated or reused.

4.3 Impact of Report

The report was shared with program partners; recommendations were discussed and, where appropriate, incorporated into the website mapping tool and smartphone application prior to the release of the program. Major adjustments focused on changes in language, placement of key features, and changes to the design of functional components such as registration and the locational pin for marking observations on the mapping tool. Wild Watch launched in 2013, and since then annual program evaluations (interviews and surveys) have identified additional improvements to the usability of the data (improved search functions and display of data on the mapping tool) and led to the development of environmental email alerts (e.g., species at risk reports). The Wild Watch program highlights the value of the HCI specialist's role in enhancing the design and functionality of the developed applications, ultimately improving program success.

5 Key Observations from Integrating Usability Testing into Development

The HCI specialist or user experience designer plays a crucial role in the successful adoption of citizen science applications. Many usability test methods are possible, and the development team needs to identify which methods provide the maximum benefit given project resources. The test methods and resources allocated will depend to some extent on the complexity of the application and the legacy products in use. Development teams should pay particular attention to the recruitment of test participants. Formative test activities can provide early feedback on design issues with small numbers of participants. Where demonstrated performance targets are set, a larger pool of participants will be necessary (for example, if you want to report significance levels when comparing two designs).
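
To give a rough sense of what a larger pool means in practice, the sketch below runs a standard power calculation for comparing the mean SUS scores of two designs with an independent-samples t-test; the effect size and thresholds are illustrative assumptions, and the example assumes the statsmodels package is available.

```python
# Illustrative power analysis: participants needed per design to detect
# a large difference in mean SUS scores (Cohen's d = 0.8) with a
# two-sided independent-samples t-test at alpha = 0.05 and 80% power.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.8)
print(f"About {n_per_group:.0f} participants per design")  # roughly 26
```

Smaller expected differences push the required numbers far higher, which is one reason formative rounds with a handful of participants remain the workhorse of iterative design.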

While usability testing adds to the workload of a development team (preparation, planning, execution, and reporting), the advantages are many. A well-defined test protocol can be reviewed, with consideration given to the demands on the test participant as well as the integrity of the data collected; in effect, the team can assess the ethical quality of the test protocol and ensure that appropriate professional standards are met. Efforts made to recruit end-users in testing allow the team to demonstrate readiness for product release. The resulting usability test report is an excellent vehicle for communicating with stakeholders: such reports provide guidance on the prioritization of ongoing development and also engage the development team with end-users. Since recommendations are evidence-based, it is easier to reach consensus on development priorities. The captured data also provide a baseline against which future changes can be measured.

These case studies did not set out to examine a usability research question; rather, their goal was successful adoption, with accurate, meaningful data collected in the field and the resulting data analyzed and used to support environmental management decisions. The ubiquitous availability of mobile technology offers tremendous potential for data collection, but ease of use is essential to the adoption of citizen science applications, and usability testing is integral to a successful development process.

5.1 Areas for Future Research

Some potential areas of applied research include: planning and stakeholder engagement models; recruitment strategies and the influence of incentives; change management and consensus decision making during development; stakeholder communication through usability reports; and long-term assessment of engagement through diary studies and usage reports.