1 Introduction

Over 2 billion people worldwide have different types, degrees, or combinations of disability, literacy, digital literacy, or aging-related barriers that impede or prevent them from using ICT [1]. Society cannot afford to have this cumulatively large percentage of people offline, yet there is no way to reach them with the current model. All-inclusive ICT that can be used by all different users is therefore an urgent need.

To address this need, a new model is under preparation: the Global Public Inclusive Infrastructure (GPII) [1]. GPII is an infrastructure that will be able to support the usage of ICT by all, including end users, developers and stakeholders. GPII is an umbrella under which many different projects are running to realize its vision. In this paper we focus on the evaluation of the all-inclusive ICT infrastructure that GPII is building within the Cloud4all EU project [3].

Cloud4all focuses on creating instant and ubiquitous auto-personalization of interfaces and materials based on user needs and preferences (N&Ps), so as to deliver accessibility to every individual where they need it, when they need it and in a way that matches their unique requirements.

2 Methodology

2.1 Cloud4all Methodology at a Glance

Cloud4all evaluates its all-inclusive ICT using the user-centered design (UCD) approach of ISO 13407, with successive evaluation phases in which users test specific Cloud4all prototypes. The results of the tests return to the developers, who use them to evolve their tools/solutions. The updated tools/solutions are then tested again, the results return to the developers, and so on. This loop aims at creating tools/solutions that fit the needs of their users, whoever these are: end users, developers or stakeholders.

The evaluation of Cloud4all is realized in three iterative phases. Each iteration phase has different objectives, which depend upon the tools' functionalities available at the time of the iteration, as well as the general needs of the project. In addition, in some cases, tools that have been tested in one phase may also be tested in the next phase(s), since additional functionalities will be added to them, or the objective of the evaluation phase may vary and new conclusions (additional to those of the previous phase) may be derived for the respective tools.

In each evaluation phase, the scope of the testing has been based on assessing the usability and the user experience of the whole system, as well as of its components separately. In each evaluation phase, though, the way in which the features will be used and tested differs, depending on the maturity of the different components that participate in the developed scenarios. Thus, across the iterative phases, different solutions will be tested with users (end users, developers, stakeholders) in different scenarios, at different levels of maturity (mock-up, Lo-Fi, Me-Fi and Hi-Fi prototypes). Complexity and diversity of the tested tools/solutions will therefore characterize all steps of the development work, and this is reflected in the evaluation framework.

2.2 Methodology for Piloting the Evaluation

Piloting the evaluation testing gives research programs an opportunity to revise instruments and data collection procedures, to ensure that appropriate questions are asked, that the correct data will be collected and that the data collection methods will work [6]. It can also help researchers identify ways to improve how an instrument is administered. For example, if participants show fatigue while interacting with the solutions, the researcher should look for ways to shorten the session, change the device, or even revise the experimental planning. If respondents are confused about how to perform a task, the solution needs to clarify the possible interaction and simplify the process.

Thus, in Cloud4all, all three evaluation phases involve conducting a preliminary test of data collection tools and procedures to identify and eliminate problems, allowing researchers to make corrective changes or adjustments before actually collecting data from the target population. In the Cloud4all pilot tests, 15 participants are asked to go through the whole study, so that we can learn about the process and correct any problems [3].

A typical pilot test involves administering the instruments to a small group of individuals with characteristics similar to those of the target population, in a manner that simulates how data will be collected when the instruments are administered to the target population.

3 Evaluation with End Users

3.1 Problem Statement

There are a number of key problems affecting the access of certain user groups to assistive technologies and specialised accessibility features that are being addressed by the Global Public Inclusive Infrastructure (GPII) [7]. Cloud4all directly addresses two of these problems:

  1. Solutions are too complicated: they are difficult to find, set up and adjust, especially when the same systems must be used by different users.

  2. Solutions do not work across all of the devices and platforms that users encounter in education, employment, travel, and daily life.

Thus, the evaluation of Cloud4all focuses on the bigger picture, assessing the whole procedure of the Cloud4all system and the auto-configuration of preferences. The aim is to find weak points to fix before the finalization of the project, as well as to define issues for future developments, in order to create a seamless and flawless interaction process for the user.

Thus, the problem statement for which Cloud4all is providing a solution could be summarized in the following scenario:

A user tries to use a device that is not configured based on their Needs and Preferences, and then a device that is. The user cannot use the device that is not configured based on their Needs and Preferences, but they can use the device that has already been configured based on their Needs and Preferences.

The user cannot configure the device to fit their Needs and Preferences, either because they are unable to (due to their disability) or because they do not know how to. It would be easier for the user if there were an automatic mechanism, which they could understand, with which they could easily log in to the device they want to use and have it configured automatically. It would also be easier for the user if there were a tool that allowed them to create an account and set their own Needs and Preferences, rather than visiting the settings of each solution they want to use and tweaking them manually.

Users with disabilities are tied to the environment in which they use the different solutions/devices. The users cannot use the devices they want under specific contexts. A mechanism that would allow the solution/device to change automatically based on the user's Needs and Preferences would enhance the user's interaction.

Thus the users' problem is twofold. On the one hand, they cannot use solutions that are not configured based on their needs and preferences; on the other hand, most users cannot configure these solutions at all, either because they are not aware of their needs and preferences or because they do not know how to change the solution settings to match them.

3.2 Evaluation with End Users Objectives

Each of the three evaluation phases of Cloud4all had different objectives, based on the maturity of the tools and the information the developers needed to extract from the users to evolve their developments. The objectives of each evaluation phase are presented below; going through them, the evolution of the Cloud4all evaluation tests becomes apparent.

  • 1st evaluation phase of Cloud4all

    • Introducing the concept of Cloud4all to users and getting their general reaction and early input.

    • Presenting the ability of the basic infrastructure to automatically launch and set up access solutions for users according to their preferences.

    • Realizing very early preliminary testing of matchmaker technologies for products for which the user has not yet specified any preferences, in Windows and Linux environments.

    • Comparing the results of the rule-based matchmaker with the results of the statistical matchmaker. The matchmakers provide the intelligence of the Cloud4all system by matching with each other all the different components that participate in the procedure.

    • Comparing the results of the rule-based matchmaker and of the statistical matchmaker with the judgments of experts.

  • 2nd evaluation phase of Cloud4all

    • Identify how Cloud4all will foster digital inclusion by improving user experience, in comparison to the current way of performing a common task in different, familiar or not, non-personalized solutions.

  • 3rd evaluation phase of Cloud4all

    • Evaluate the user experience with the Cloud4all auto-configuration procedure.

    • Evaluate the improved use of different devices and solutions.

    • Evaluate the acceptance of the context-related changes functionality.

    • Evaluate the management of Needs and Preferences with the PMT.
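The rule-based matchmaking mentioned in the objectives above can be illustrated with a minimal sketch. This is a hypothetical Python example of the general idea (mapping a user's stated needs and preferences onto concrete settings of a target solution via if-then rules); the preference names, rules and values are illustrative assumptions, not the actual Cloud4all matchmaker logic.

```python
# Hypothetical sketch of rule-based matchmaking: simple if-then rules
# translate a user's needs & preferences (N&Ps) into concrete settings
# for a target solution. All names and rules are illustrative only.

def rule_based_match(preferences, solution_settings):
    """Return a copy of solution_settings adjusted by accessibility rules."""
    settings = dict(solution_settings)  # do not mutate the caller's dict
    if preferences.get("lowVision"):
        # enlarge text and enable high contrast for low-vision users
        settings["fontSize"] = max(settings.get("fontSize", 12), 18)
        settings["highContrast"] = True
    if preferences.get("screenReader"):
        settings["textToSpeech"] = True
    return settings

user_prefs = {"lowVision": True}
defaults = {"fontSize": 12, "highContrast": False}
print(rule_based_match(user_prefs, defaults))
```

A statistical matchmaker would instead infer such settings from data about similar users, which is why the project compares the two approaches against expert judgments.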

3.3 Evaluation with End Users Scenarios

As the objectives of each evaluation phase evolve, so do the scenarios the users test. In the 1st evaluation phase, users went through a guided scenario in which they had to set their preferences using a simplified tool. The preferences were captured in the form of application-specific settings: the facilitator noted the value of each setting of the application under evaluation, after explaining it to the user and demonstrating its effect on the entire solution interface. In the scenario, the user would first identify his/her preferences in Windows, then go to Linux and evaluate how the settings were inferred in that OS; the user would then repeat the procedure in the opposite direction, identifying the preferences in Linux and evaluating how they were inferred in Windows. This gave us two pairs of settings to compare: the settings the user defined in Windows (Token A) against the settings inferred from Linux to Windows (Log A), and the settings the user defined in Linux (Token B) against the settings inferred from Windows to Linux (Log B). This procedure is depicted in the figure that follows (Fig. 1).

Fig. 1. Cloud4all 1st evaluation phase: auto-configuration scenario
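The comparison of a user-defined settings set (token) against the settings inferred on the other OS (log) can be sketched as a simple key-by-key agreement check. This is an illustrative Python sketch, not project code; the setting names are assumptions.

```python
# Illustrative sketch of the 1st-phase comparison: the settings a user
# defined on one OS (the "token") are compared key by key with the
# settings inferred on the other OS (the "log").

def compare_settings(token, log):
    """Return the fraction of shared token settings that were inferred correctly."""
    shared = [k for k in token if k in log]
    if not shared:
        return 0.0
    matches = sum(1 for k in shared if token[k] == log[k])
    return matches / len(shared)

# e.g. Token A (defined in Windows) vs Log A (inferred from Linux to Windows)
token_a = {"fontSize": 18, "highContrast": True, "cursorSize": "large"}
log_a = {"fontSize": 18, "highContrast": True, "cursorSize": "medium"}
print(compare_settings(token_a, log_a))  # 2 of the 3 shared settings agree
```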

Moving forward to the second evaluation phase, the developments evolved and so did the procedure. The users no longer used application-specific settings to define their N&Ps, but used the Cloud4all Preferences Management Tool (PMT), which uses common terms. Common terms harmonize the application-specific settings and values across all the applications used in Cloud4all. The users thus had a more user-friendly tool with which to define and explore their N&Ps, which were now captured in the needs and preferences server and retrieved from there. Additionally, in the 2nd evaluation phase the users had the possibility to navigate between different devices, and not only Windows and Linux: Android OS, Java mobile phones and other Cloud4all applications were made available to them. The procedure, which is depicted in the following figure, asks the user to use the PMT on Platform A to define their N&Ps and create a token, and then to use this token to log in to Platform B and evaluate the inferred settings (Fig. 2).

Fig. 2. Cloud4all 2nd evaluation phase: auto-configuration scenario
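The common-terms idea can be sketched as a pair of mappings: each application's own setting names are lifted into a shared vocabulary, and preferences expressed in that vocabulary are projected back onto another application's settings. The term and setting names below are illustrative assumptions, not the actual Cloud4all registries.

```python
# Hedged sketch of "common terms": application-specific settings are
# harmonized into a shared vocabulary so preferences captured against one
# application can be applied to another. All names are illustrative only.

# Application-specific setting name -> common term it aligns with.
APP_TO_COMMON = {
    "magnifierZoom": "display.magnification",  # e.g. a Windows magnifier
    "zoom-factor": "display.magnification",    # e.g. a Linux magnifier
}

# Per-application inverse mapping: common term -> that app's setting name.
COMMON_TO_APP = {
    "linuxMagnifier": {"display.magnification": "zoom-factor"},
}

def to_common_terms(app_settings):
    """Lift application-specific settings into the common-term vocabulary."""
    return {APP_TO_COMMON[k]: v for k, v in app_settings.items() if k in APP_TO_COMMON}

def to_app_settings(common_prefs, app):
    """Project common-term preferences onto one application's settings."""
    mapping = COMMON_TO_APP[app]
    return {mapping[k]: v for k, v in common_prefs.items() if k in mapping}

# Preferences captured against one magnifier transfer to the other.
prefs = to_common_terms({"magnifierZoom": 2.0})
print(to_app_settings(prefs, "linuxMagnifier"))  # {'zoom-factor': 2.0}
```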

Finally, in the 3rd and last evaluation phase, more naturalistic scenarios will be evaluated. The users will be given a set of devices and applications, along with a token set by the pilot facilitator based on their disability profile. The users will be asked to navigate between these different devices and applications as if they were in their own environment and to validate the auto-configuration procedure and results. The applications used will be close to the ones users use in their everyday life, including TV, laptop, desktop, tablet and ticket vending machine, and the evaluation will be realized in controlled, close-to-reality environments such as domotic labs or users' own environments.

4 Evaluation with Developers

4.1 Problem Statement

Apart from the tools for the end users, Cloud4all also develops tools to be used by different types of developers who may benefit from Cloud4all. These are mainly AT developers, who can add accessibility features to their solutions simply by incorporating them into the Cloud4all and GPII framework.

One of the main objectives of GPII is “to provide the tools and infrastructure needed to allow diverse developers and vendors to create new solutions for these different users and platforms and to easily and cost effectively move them to market and to users internationally” [7].

The Cloud4all project has been trying to assist developers throughout the whole process of integrating their solutions into the Cloud4all/GPII infrastructure. The greatest existing problem is that there is no automatic way for developers to include their settings in the Cloud4all/GPII unified listing that has been created in Cloud4all.

4.2 Evaluation with Developers Objectives

Cloud4all provides various channels that can be used by developers in order to work with Cloud4all and GPII. A key solution for developers provided by the Cloud4all project, which was ready from very early stages of the project, is the Semantic Alignment Tool (SAT), a software module for the syntactic and semantic analysis of solutions that takes into account the different adaptation dimensions of the offered cloud applications, services and tools.

An early prototype of the SAT was tested during the 1st iteration, showing that its usability was below the project expectations and that additional effort should be allocated to its improvement. Therefore, one of the objectives of the 2nd evaluation phase was to verify that the usability of the solution had been improved, fostering the acceptance of the Cloud4all concept and GPII framework by developers.

The results of the 2nd evaluation phase were quite positive for the SAT; thus, in the final evaluation phase the objectives have been broadened to a higher level, assessing all the material that has been created for developers within Cloud4all, including the developers' kit, which encompasses the following:

  • Guidelines about the installation of Cloud4all.

  • Guidelines about testing the Cloud4all/GPII architecture.

  • Guidelines about the integration of a new solution into Cloud4all/GPII:

    • When the solution runs on a specific platform.

    • When the solution is web-based.

  • Additional information, such as:

    • GPII source code

    • Blog for the developers

    • Cloud4all wiki

    • Cloud4all/GPII issue tracker.
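To make the integration problem concrete, the kind of declaration a developer might provide when registering a solution can be sketched as follows. This is a purely hypothetical sketch in Python, not the actual Cloud4all/GPII unified listing or registry format; the identifier, fields and common term are invented for illustration.

```python
# Purely illustrative sketch of a solution declaration for an
# auto-configuration framework: an identifier, the platform the solution
# runs on, and how its settings align with common terms. This is NOT the
# actual Cloud4all/GPII registry format.

solution_entry = {
    "id": "com.example.screenMagnifier",  # hypothetical solution id
    "name": "Example Screen Magnifier",
    "platform": {"os": "linux"},
    "settings": {
        # app-specific setting -> common term it aligns with (illustrative)
        "zoom-factor": "display.magnification",
    },
}

def supports(entry, os_name):
    """Check whether a registered solution targets the given platform."""
    return entry["platform"]["os"] == os_name

print(supports(solution_entry, "linux"))  # True
```

Automating the production of such declarations, rather than having developers write and submit them by hand, is precisely the gap identified in the problem statement above.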

5 Evaluation with Stakeholders

5.1 Problem Statement

The stakeholders' profiles in Cloud4all range from governments to service providers, caregivers and end users' experts, encompassing financing organizations, AT ICT industry organizations, technology-oriented organizations, governmental and legal organizations, service providers and end-user organizations. This heterogeneous group has different points of view and faces different problems when dealing with ICT accessibility delivery.

The main problem with the various stakeholder groups is that, most of the time, even if they are fully aware of the needs and preferences of individuals with disabilities, they do not know how to accommodate them. Stakeholder involvement in Cloud4all has allowed us to gather ample material and knowledge about user needs and preferences, as well as to pinpoint some challenges that we face throughout the lifecycle of the project, such as the need to simplify configuration processes and/or to make users aware of the built-in accessibility features of different products and services.

5.2 Evaluation with Stakeholders – Objectives and Plan

Since the stakeholder group in Cloud4all is so broad, the need to identify the needs of each subgroup became apparent early in the project. In each step of the evaluation, different subgroups have participated, providing different, but in many cases similar, results.

In the first evaluation iteration, only end-user organizations participated. Participants with different profiles of disability were involved in the evaluation; however, as the end users provided a personal view of their problems and preferences, it was important to also involve expert representatives from end-user organizations, in order to gather a wider overview of, and experience with, the needs and preferences of the different disability groups. To that end, expert representatives from organizations of elderly people, visually impaired users, people with learning difficulties and cognitive impairments, people with low literacy and people with dyslexia formed the panel of stakeholders in the 1st evaluation iteration of Cloud4all.

As the project evolved in the 2nd evaluation phase, stakeholders with different profiles (governments, service providers, caregivers, end users’ experts) have been involved in qualitative data gathering, to complement the end users’ and developers’ evaluations.

The goal of the stakeholders’ participation in the 2nd evaluation phase was to gather their impressions and qualitative feedback on the concepts, tools and the whole Cloud4all process from a different perspective. Therefore, the evaluation with the different stakeholders was explorative and not guided by a specific research question. This has been achieved mainly by the participation of stakeholders in structured focus groups organized around concrete research topics like the Cloud4all/GPII concept and auto-configuration scenario, the context-related changes scenario, the optimum N&P gathering scenario, the GPII marketplace and recommendation system scenario, etc.

Finally, during the 3rd evaluation phase, more focused feedback is needed from the stakeholders. For this reason, the stakeholders participating in these evaluation sessions will be selected among experts related to end-user organizations, as well as AT providers and caregivers. Stakeholders with a more industrial vision, such as ICT vendors and the software industry, policy makers, etc., will be involved in Cloud4all through the Open Days that will be planned and realized later in the project, where Cloud4all will be demonstrated and existing applications and tools may be experienced.

Thus, the goal of the stakeholders’ evaluation in the 3rd evaluation iteration is to gather their impressions and qualitative feedback on the concepts, tools and the whole Cloud4all process (including installation) from two different perspectives:

  • What is the usefulness for users? (from the stakeholders' point of view), and

  • What is the usefulness for stakeholders? (the stakeholders acting as mediators/supporters of end-user organizations or AT providers).

Stakeholders will be involved in structured focus groups where the whole Cloud4all concept (from installation to use on different devices) will be presented. Based on this, stakeholders will participate in discussions around concrete research topics.

6 Conclusions

In this paper, we presented the evaluation framework developed for all three evaluation iterations of all-inclusive ICT with end users, developers and stakeholders in the scope of Cloud4all/GPII. Each user group has been treated as a separate part of the evaluation, though all fall under the general umbrella of the evaluation and evolution of the all-inclusive ICT infrastructure of Cloud4all and GPII.

In each evaluation phase, different objectives and research questions have been defined in order to serve the needs of the project at each stage. These have been evaluated using different scenarios for each user group and each evaluation iteration, moving from more simplistic scenarios to more mature and complex ones.

In the 1st evaluation iteration, participants had in their hands mainly mock-ups and only some low-fidelity (Lo-Fi) prototypes, and were able to assess only basic functionalities of restricted Cloud4all tools, fully guided by the facilitators. In the 2nd evaluation iteration, participants had a full range of medium-fidelity (Me-Fi) Cloud4all functional prototypes and were able to go through the whole Cloud4all experience, from setting their own preferences to seeing how these preferences are inferred on a specific set of devices. Finally, in the 3rd iteration, users can transfer their set of needs and preferences across a vast number of solutions in a very close-to-reality, unguided scenario and environment.