1 Introduction

The concept of Health Information Systems (HIS) encompasses the wide range of information technologies used in healthcare to help healthcare organisations gather and process data, as well as disseminate information. These systems have the potential to increase efficiency and, at the same time, save considerable amounts of health expenditure [1]. In fact, HIS incorporate a variety of different types of systems, such as patient information systems, clinical information systems, clinical decision support systems, administrative systems, radiology information systems, pharmacy information systems, laboratory information systems, and hospital information systems, among others aimed at information management [2].

Given the diversity and complexity of HIS, and considering that the goal of such systems is to improve clinical performance and patient outcomes while assuring the quality, effectiveness and efficiency of health services, a rigorous evaluation is extremely important to get the most benefit out of an HIS.

However, HIS evaluation is a difficult process due to the complex nature of the health care domain and of the objects to be evaluated [3], as well as the comprehensiveness of the concept of evaluation itself [4]. The evaluation of HIS involves assessing the application and its impact on the organizational environment in which it is implemented, determining the system's effectiveness and efficiency, the users' level of satisfaction, the system's usability, and the weaknesses and strengths of these systems.

Regarding the evaluation itself, there are different evaluation perspectives, reflecting different motivations and involving different stakeholders. The very definition of HIS evaluation differs in the literature, according to the focus of each study: some evaluation approaches focus on economic criteria [5,6,7], and others on user-oriented criteria [8,9,10].

One of the definitions that gathers the most consensus is presented by Ammenwerth et al. [11], who define HIS evaluation as “the act of measuring or exploring properties of a HIS in planning, development, implementation, and operation, the result of which informs a decision to be made concerning that system in a specific context”. Another important study about HIS evaluation, which addresses the questions (who, what, how, when and why) related to evaluation activities and integrates technological, human, social and organisational issues, was presented by Yusof et al. [12]. According to these authors, the evaluation seeks to answer five questions: why (objective of the evaluation); who (stakeholders and their needs and perspectives); when (phase in the system development life cycle - SDLC); what (focus of evaluation); and how (methods used in evaluation).

Despite the large body of literature on HIS evaluation and the emergence of different guidelines to reduce the complexity of evaluating HIS, in practice difficulties remain, particularly regarding the methods to be used and the phases of the SDLC in which they should be applied.

Thus, this work aims to explore, based on a literature review, the main HIS evaluation methods that support development, identifying which methods can be applied at each stage of the SDLC. This work is intended to cover the whole life cycle of HIS, with emphasis on the formative process, from requirements analysis to development. Additionally, this work discusses the reasons for evaluating such systems, and the issues discussed are illustrated using two real case studies of HIS implementation, in which some of the methods were successfully applied.

2 Theoretical Background

2.1 Health Information Systems and the Importance of an Evaluation Process

The role of HIS in medical practice has changed significantly over the past five decades. Initially, as in other sectors, technologies were developed to support the operationalization of administrative functions, increasing work efficiency and reducing operating costs. Currently, HIS are used to support any area of a health organisation, with particular relevance in the management of patients' clinical information, thus having a great impact on clinical practice and on the communication between healthcare providers and their patients [13].

Nonetheless, for an HIS to reach its maximum potential, it should: (i) be designed and implemented effectively; (ii) be accepted and properly adopted by its potential users; and (iii) benefit the environment in which it is embedded, taking into account the purpose for which it was developed. In this context, the evaluation of HIS becomes an extremely important activity, as it makes it possible to measure, characterize and predict the level of success of the HIS in the context of clinical practice. As in any other area, it is difficult to know the real benefits and impact of an IS on its environment without going through an evaluation process.

In the real world, the evaluation process itself is quite comprehensive, and may have different objectives, be driven by different interest groups, or even be supported by different theories. In practice, and taking into account the vast literature on the subject, there are different types of evaluation, which can be classified into one of two major groups: (i) a process that ensures that the product under evaluation meets the requirements and information needs, satisfies the organizational objectives, and is free of bugs, resulting in a functional, usable product that is well accepted by potential users [9, 10]; (ii) a process that determines the impact of the product on the sector, thus providing a set of benefits to the environment in which it is embedded, measured in terms of costs, quality of service, work efficiency and patient safety [14,15,16].

Although these evaluation processes have a relationship of dependency (since the result of the former will in some way influence the result of the latter), evaluation studies are often conducted by different interest groups, sometimes even in different research disciplines. Studies of the first type are found in areas related to Software Engineering and Usability Engineering, whereas studies of the second type are often conducted by researchers in the Social Sciences in order to understand certain economic, social or cultural phenomena.

2.2 System Development Life Cycle Focused Evaluation

HIS are complex systems [3]. Moreover, given the impact these systems have in a healthcare context, they are classified into the group of systems with ‘zero tolerance’ to failure. Thus, SDLC-focused evaluation emerges as a powerful tool, not only to minimize the possibility of failure, but also to ensure that technologies are able to fulfil their potential in improving care, reducing cost and increasing efficiency [17]. Evaluation makes it possible to understand how and under what conditions HIS work, and to determine the safety and effectiveness of the system [1], while at the same time collecting evidence about good practices and the effects and impacts of the technologies.

Studies evaluating technologies with the most diverse techniques have been performed since the 1960s [11]. In the case of HIS, most of these studies started by focusing on how these technologies relate to professionals, management and user involvement, later integrating lessons learned from the HIS development process [18]. From the 1980s, studies related to user acceptance and adoption of technologies in health care organizations began to appear [18].

Regarding evaluation in HIS development, some reference studies have appeared in the last two decades. The Kaplan and Shaw [19] study is noteworthy, as it reviews how aspects related to people and to social and organizational issues have been considered in HIS evaluations, emphasizing the importance of using evaluation mechanisms during the whole SDLC. Nykänen and Kaipio [18] also state that the focus of HIS evaluation changes during its life cycle: in the implementation phase, evaluation often addresses technical aspects, while for a completed system the evaluation focuses on the impact on quality of service and on patient care. Gremy et al. [20] and Kushniruk [9] also present frameworks that support evaluation along the SDLC. While the former highlights the role of humans in the five-step evaluation process, i.e., conception, preparation of machine, execution of the program, output, and general impact, the framework proposed by Kushniruk [9] considers evaluation an iterative process, using qualitative methods in different stages of the SDLC. Ammenwerth and De Keizer [5] found interesting developments in evaluation research over a period of 20 years (1982–2002), with explanatory research and quantitative methods prevailing in these studies.

It should be noted that in the last two decades there have been many evaluation studies focusing on the users' perspective and on system usability, owing to the recognized importance of good HIS usability and its impact on health care practice [9, 21]. Evaluation focused on usability techniques during the development process has also aroused the interest of many researchers in this area of knowledge [9, 10, 22].

In general, HIS evaluation concerns humans, technology and the context of use. However, it is complex and represents a major challenge due to the complex nature of the health care domain and of the object to be evaluated [3]. To mitigate this difficulty, several studies have proposed HIS evaluation frameworks that take into account human, social, organizational, and cultural aspects [2, 12, 19, 23, 24]. The study by Yusof et al. [2] analyses some HIS evaluation frameworks, concluding that “these frameworks complement each other in that they each evaluate different aspects of HIS pertinent to human, organizational and technological factors”, but that they do “not provide explicit evaluation categories to the evaluator”. Andargoli et al. [7] also present an interesting review of the literature on evaluation frameworks, classifying them as: (i) SDLC focused; (ii) generic; (iii) social relationships focused; and (iv) behavioural focused evaluation frameworks. Based on this study, the authors conclude that, although there are several frameworks, there seems to be a lack of consensus regarding what to evaluate, when to carry out the evaluation, and how to conduct it.

From another perspective, and according to Symons [25], an effective evaluation of an Information System (IS) requires a comprehensive understanding of the interaction between content, process and context. Content refers to the characteristics of the technology under study; Context refers to the environment in which the implementation takes place; and Process is the way in which the implementation is conducted. The Content, Context, Process (CCP) model was originally proposed by Pettigrew [26] in the scope of management and organizational change. Later, this model was adapted by Symons [25] for IS evaluation, and it is currently used by several other authors to overcome the limitations of generic IS evaluation frameworks, since the CCP model represents a flexible solution that can easily be extended to any problem and application domain.

Some authors have attempted to accommodate, in a single model, both the answers to the what, how, when, why and who questions and the CCP framework [6, 7]. The Content dimension of the model refers to a particular area under examination and is concerned with the subject of evaluation (what). The Process dimension focuses on when the evaluation takes place and how it is performed. The Context dimension aims to capture why the evaluation is carried out and who is involved in it.

3 Methods and Tools to Evaluate HIS in Development Life Cycle

Regarding evaluation methods in HIS development, some studies argue that the best ones emerged from the Cognitive Sciences and Human Factors Engineering and can be applied at any phase of the SDLC (formative evaluation), or in the final stage, after the product is developed (summative evaluation) [6, 17, 18, 23, 27, 28]. This argument reflects the fact that every Interactive Information System (IIS) designed to be used by people must meet their needs and expectations; otherwise, dissatisfaction and, consequently, rejection of the product may follow. If, on the one hand, an IIS is characterized as a functional product that exists to help solve a problem within an organization and to increase organizational efficiency, on the other hand, an IIS without the user component does not make sense; thus, the user emerges as one of the central elements in the context of IISs.

The framework depicted in Fig. 1, built upon the models of Stockdale and Standing [6] and Andargoli et al. [7], maps the answers to the what, how, when, why and who questions onto the CCP model.

Fig. 1. Framework mapping the answers to the what, how, when, why and who questions onto the CCP model

Compared with the previous models presented by Stockdale and Standing [6] and by Andargoli et al. [7], this framework adds the order in which the answers to the “what, how, when, why and who” questions must be found.

Before any evaluation process, the motivation must be identified, i.e., the reason that led to the need for the evaluation. Thus, the answer to the why question should be the starting point, since it represents the reason for which the evaluation will be done. Given this motivation, we should look at how to fulfil it through the content, i.e., by answering the ‘what to evaluate’ question. Taking the content into account, it is possible to define the process, namely at which stage of the product's life cycle the evaluation should be performed (when), how this evaluation should be accomplished in terms of the methods to be used (how), and which stakeholders should be involved in the evaluation process (who). It should be noted that the stakeholders to be involved depend heavily on the evaluation phase and on the method chosen for the evaluation.

Thus, the methods to be used in an evaluation process (how) are determined by the phase to be evaluated (when) and by the type of evaluation to be performed (what), which in turn depends on the objective that originated the evaluation (why).
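To make this ordering concrete, the sketch below encodes the dependency chain why → what → when/how/who as a simple planning record. It is purely illustrative: the `EvaluationPlan` type, its field names, and the example values are hypothetical and not part of the cited frameworks.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvaluationPlan:
    """Illustrative record of the five evaluation questions, in the
    order in which they should be answered (why -> what -> when/how/who)."""
    why: str                  # Context: motivation for the evaluation
    what: str                 # Content: subject/focus of the evaluation
    when: str                 # Process: SDLC phase in which it takes place
    how: List[str] = field(default_factory=list)   # Process: methods to apply
    who: List[str] = field(default_factory=list)   # Context: stakeholders involved

# Hypothetical example: a formative usability evaluation during system design.
plan = EvaluationPlan(
    why="ensure the design matches the users' mental model",
    what="usability of the proposed interface",
    when="system design",
    how=["low-fidelity prototyping", "cognitive walkthrough", "usability tests"],
    who=["clinicians", "nurses", "usability evaluators"],
)
print(plan)
```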

As mentioned, there are formative and summative approaches containing different evaluation measures, some focusing on economic criteria and others on user-oriented criteria. While formative evaluation aims to provide feedback to designers and programmers and focuses on user-oriented criteria, summative evaluation is concerned with assessing the outcome after the technology is completed, and can usually focus on either user-oriented or economic criteria. In the particular case of formative evaluation, it is important to collect measurements during the pre-implementation stage, to establish a basis for comparison, and during the implementation, to evaluate changes and make adjustments to the process.
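As a minimal sketch of this baseline idea, the snippet below compares a hypothetical user-oriented measure (task completion time) collected pre-implementation against the same measure collected during implementation. All figures and names are invented for illustration.

```python
# Illustrative pre/post comparison for a formative evaluation.
# All figures are invented; the point is only the baseline idea:
# measure before implementation, then re-measure during it.
baseline_seconds = [120, 95, 140, 110]   # task completion times pre-implementation
current_seconds = [90, 80, 100, 85]      # the same task measured during implementation

def mean(values):
    return sum(values) / len(values)

relative_change = (mean(current_seconds) - mean(baseline_seconds)) / mean(baseline_seconds)
print(f"Mean task time changed by {relative_change:.0%} relative to baseline")
```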

User-oriented methods can be used in the scope of a formative evaluation, to ensure that the product under development takes the users' needs into account, and also in the context of a summative evaluation, to ensure that the final product conforms to the pre-defined specifications and matches the users' expectations in terms of usability.

Regarding the evaluation methods, several tools, classified as qualitative, quantitative and mixed-methods, are available [7, 17, 29].

Table 1 presents some of the methods most used in the context of HIS evaluation, with those that combine several techniques having presented better results. These methods can be used in the different phases of the SDLC.

Table 1. Most used methods and tools to evaluate HIS in the SDLC

Figure 2 presents the most appropriate methods for each development phase, although any of them can be used in a combined approach at different stages of development.

Fig. 2. Main methods and tools used in HIS development

In order to understand the problem – the problem analysis phase – it is necessary to be aware of the processes and activities involved. In this stage, workflow analysis and job analysis represent the most suitable methods.

The requirements analysis stage – system analysis – which also occurs at a preliminary stage of development, requires knowledge of the users' tasks and needs, as well as an understanding of the users' mental model. During this stage, several methods can be applied, from traditional social science methods (such as observation, interviews, focus groups, ethnography, and questionnaires) to methods that evaluate socio-cognitive aspects (such as task analysis, video analysis, and cognitive walkthroughs using low-fidelity prototypes) [9, 10, 30].

In the project stage – the system design phase – evaluation techniques are particularly useful because they ensure that the model includes all the requirements described in the system specification. In this phase (and since the representation is abstract, usually in UML or another graphical representation language), prototyping (low-fidelity and/or high-fidelity prototypes), coupled with methods from usability engineering such as usability tests and heuristic evaluation, is the most suitable approach to evaluation [31]. Questionnaires can also be used in conjunction with usability tests, as well as task analysis, video analysis, and cognitive walkthroughs to validate the mental model.

In the development stage – the coding and test phase – the same techniques as in the design phase are used; at this stage, the low-fidelity prototype is no longer necessary, since all tests can already be executed using components of the final product (an evolutionary or high-fidelity prototype).

Finally, after the product is developed – support – usability tests, video analysis, heuristic evaluation, and logging (in field studies) are used to ensure that the product is usable, correctly used, and well accepted by the user community. Field studies are evaluation studies that usually occur in natural settings, to learn how people interact with technology in the real world [32], and they are widely combined with the logging method. From an economic perspective, randomized trials, cost-effectiveness analyses and cost-benefit analyses are available.
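For quick reference, the phase-to-method walkthrough above can be condensed into a simple lookup table. The sketch below is an illustrative summary of this section's prose only, not a normative artifact of the cited frameworks; the phase labels and the `methods_for` helper are hypothetical names.

```python
# Illustrative mapping of SDLC phases to the evaluation methods
# discussed above (a summary of the prose, not a normative list).
METHODS_BY_PHASE = {
    "problem analysis": [
        "workflow analysis", "job analysis",
    ],
    "system analysis (requirements)": [
        "observation", "interviews", "focus groups", "ethnography",
        "questionnaires", "task analysis", "video analysis",
        "cognitive walkthroughs (low-fidelity prototypes)",
    ],
    "system design": [
        "prototyping (low/high fidelity)", "usability tests",
        "heuristic evaluation", "questionnaires", "task analysis",
        "video analysis", "cognitive walkthroughs",
    ],
    "coding and test": [
        "evolutionary/high-fidelity prototyping", "usability tests",
        "heuristic evaluation", "task analysis",
    ],
    "support (post-development)": [
        "usability tests", "video analysis", "heuristic evaluation",
        "logging in field studies", "randomized trials",
        "cost-effectiveness analysis", "cost-benefit analysis",
    ],
}

def methods_for(phase: str) -> list:
    """Return the evaluation methods suggested above for a given SDLC phase."""
    return METHODS_BY_PHASE.get(phase, [])

print(methods_for("system design"))
```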

4 Examples of HIS Development Using Evaluation Methods

The hemo@care (see Fig. 3a) and hemo@record (see Fig. 3b) applications represent two types of web-based HIS developed using several evaluation methods. hemo@care is a local system, more specifically a web-based application to manage haemophilia-related information in a central hospital located in Portugal [13, 33]. hemo@record is a national system, i.e., a web application supporting a national registry of haemophilia and other congenital coagulopathies in Portugal [34,35,36].

Fig. 3. (a) hemo@care; (b) hemo@record

In terms of information complexity, although hemo@care is a local system, it contains information at a greater level of detail compared with hemo@record. In terms of requirements, in the case of the former, and because it is a local system, the requirements were collected in a single hospital, involving clinicians, nurses and people with haemophilia (PWH). In the latter, as it is a national system for the exclusive use of clinicians, the requirements were collected from a group of clinicians working in haemophilia care at several hospitals located in different Portuguese cities.

Given the particularities of each system, the evaluation methods used in the development process of each one were different.

In order to understand the problem context in the case of hemo@care, workflow analysis techniques complemented with documentation analysis were used [10]. Since hemo@record was not confined to a specific organizational system, the understanding of the problem was achieved using benchmarking techniques, analyzing several national registration systems already successfully implemented in other countries [37].

Regarding requirements elicitation, techniques such as documentation analysis, direct observation, ethnography, focus groups, interviews, and task analysis were used, following a triangulation approach [8, 10, 30]. The task analysis method was particularly useful for understanding the users' mental model in a complex requirements elicitation context [30]. In the particular case of hemo@record, the focus group was the technique that most contributed to the final result, with the meetings mediated by a collaborative prototype [31]. The use of a prototype at this stage promoted requirements elicitation and, at the same time, assisted in the conversion of implicit knowledge (user experience) into explicit knowledge (documented knowledge).

In the design and development phases, in the case of hemo@care, evolutionary prototyping and task analysis were the tools that most stood out, and they were complemented with heuristic evaluation, usability tests and questionnaires. Task analysis at this stage made it possible to validate the functionalities previously identified and, simultaneously, to understand the users' mental model in order to find the best sequence in which to present information within the scope of each functionality. For hemo@record, the methods that provided the best results were prototyping through mock-ups [31] and heuristic evaluation. The prototype at this stage made it possible to validate the functionalities previously identified and to find new requirements.

It should be noted that in both cases the application development processes followed an iterative and incremental approach.

The experience of developing these two HIS allowed us to conclude that, given the nature of the problem and the type of HIS, the evaluation techniques supporting the development of the technology must be chosen carefully. This is justified by the fact that the same techniques can provide different results depending on the type of system, so it is necessary to adjust the method to the problem under study.

5 Summary and Conclusions

The evaluation of HIS during development is a very important procedure for determining the impact of the technology, as well as for assuring the quality of the final solution.

Moreover, notwithstanding the importance of HIS evaluation, the challenge of an evaluation process stems largely from the complex nature of the health care domain, the objects to be evaluated, and the comprehensiveness of the concept of evaluation itself. The literature reports a lack of consideration of some evaluation aspects in HIS development, more specifically in terms of the methods to be used and the phases of the SDLC in which these methods should be applied.

The present work described a study that explored the main HIS evaluation methods that support development, identifying at which stage of the SDLC these methods can be applied. Additionally, this work presented the reasons for evaluating such systems, illustrating these issues with two real case studies of HIS implementation, in which some of the methods were successfully applied.

From this investigation emerged a proposal for a framework that maps the five evaluation questions – what, how, when, why and who – onto the Content, Context, Process (CCP) model, highlighting the order in which those questions should be answered. Additionally, a model that assists in choosing the evaluation method according to the SDLC phase was proposed, based on the literature review and fine-tuned with the practical experience of implementing two HIS – hemo@care and hemo@record.

Regarding the evaluation methods at each stage of the SDLC, contextual methods that involve the user in his/her work environment, such as workflow analysis, job analysis, direct observation, ethnography and on-site interviews, should be highlighted for the problem understanding and requirements analysis phases. In the project phase, the prototyping technique represents a good alternative for simulating real scenarios, and presents excellent results when used in conjunction with usability tests and heuristic evaluation. The task analysis technique is also an important method in the project phase, aiding in the understanding of the mental model and in the conversion of tacit knowledge into explicit knowledge. In the development phase, although it makes sense to use the same techniques as in the design phase, high-fidelity prototypes, more specifically evolutionary prototypes developed with the same technology as the final solution, should be preferred.

Finally, it is important to note that the complete evaluation process should assume an iterative and incremental development approach, making it possible to detect and correct earlier failures.