
Telematics and Informatics

Volume 34, Issue 8, December 2017, Pages 1736-1771

Integrated feedback control reporting for improving quality of technical service reporting in IT service management

https://doi.org/10.1016/j.tele.2017.08.007

Highlights

  • Quality issues of technical service reporting are defined for large ITSPs in ITSM.

  • A process model is implemented to improve technical service reports (TSRs).

  • Design principles are provided to improve TSRs.

  • An ensemble artifact is implemented to deliver an improvement loop for TSRs.

Abstract

Service reporting is an essential part of Service Level Management (SLM) in IT Service Management (ITSM). SLM reports show Service Quality status against the Service Level Targets (SLTs) defined in a Service Level Agreement (SLA). However, producing Technical Service Reports (TSRs) for SLM raises Data Quality (DQ) challenges when the IT services of a large enterprise are outsourced to a large IT service provider (ITSP). The technical metrics in these reports are derived from huge volumes of unverified, non-normalized system-generated events and logs in a large enterprise environment. In addition, the configuration item and service information meta-data essential for producing these SLM reports are often lacking. These challenges lead to low intrinsic and contextual DQ of the reports, which erodes the customer’s trust and management visibility and in turn causes Service Quality (SQ) issues and financial penalties. This research used the Action Design Research (ADR) methodology with insider participant observation, focus groups, interviews and questionnaires for data gathering, and content analysis. Design, action and data collection took place over four years in a Multi-National Company (MNC) in Malaysia that provides IT services to one of the ten largest enterprises in the world. The implemented artifact is an integrated reporting system that uses Feedback Control System (FCS) theory and ITSM concepts to create a Reports’ Data Quality Improvement Loop (RDQIL) that improves the TSRs’ DQ. The research output is an ensemble artifact, formalized as a class of problems, a class of solutions and a set of design principles, comprising a new Hybrid Normalized Data Warehouse (HNDW) architecture and an Integrated Feedback Control Reporting (IFCR) process model. The significance of the research lies in improving SQ and reducing the risk of financial penalties by integrating complex event processing and service reporting into Operation processes. The research’s design principles and IFCR process model create an RDQIL that improves TSRs’ DQ in large ITSPs, which leads to SQ improvement and prevents financial penalties.

Introduction

Information Technology (IT) services are now part of every business and organization and, like all other business constructs, must be well managed to deliver their highest value. Rockart (1982) listed the Critical Success Factors (CSFs) of Information Systems (IS) and addressed the most obvious CSFs for service quality (Rockart, 1982). Since then, as IT has matured over the last 30 years, companies have faced the challenge of moving from production and software to service-based IT operation (Marrone and Kolbe, 2011). This perspective and phenomenon in IT gave rise to IT Service Management (ITSM). ITSM focuses on IT service creation, design, delivery, and maintenance in order to implement and manage the quality of IT services in a business and to play an important role in that business’s successful use of IT (Hanna, 2011). ITSM has a high impact on the success of an organization that has chosen IT as a service (Marrone and Kolbe, 2011). To let both customer and supplier management monitor the quality of service, ISO/IEC 20000 introduced Service Reporting as the part of ITSM that monitors trends and performance against service targets based on a document agreed between the two parties (IEEE Std. 20000-1, 2013).

Service Level Agreements (SLAs) are the contracted measures that describe the services to be delivered. They also specify the metrics by which the effectiveness of service activities, functions and processes will be measured, examined, changed and controlled (Maurer and Matlus, 2007). SLAs and their associated agreements are powerful tools that enable customer managers to measure, audit and align their outsourcing relationships toward successful IT outsourcing (Maurer and Matlus, 2007). ITSM, and especially the Information Technology Infrastructure Library (ITIL) as the best-known widely implemented ITSM framework (Galup et al., 2009), defines service reporting as the main process of SLA monitoring, responsible for providing Key Performance Indicator (KPI) reports for Service Level Management (SLM) (Britain et al., 2011). These service reports are delivered to the customer through SLM processes. Many IT service organizations consider the measurement of IT service management processes, especially service support processes, a challenge (Lahtela et al., 2010).

Some companies outsource their IT services to a third-party organization called an IT Service Provider (ITSP), an organization supplying services to one or more internal or external customers (Hanna, 2011). For outsourced IT services, ITSPs must provide service reporting to the customer based on the SLA. However, SLA monitoring is an open issue within the IT Service Management domain (Correia and Brito e Abreu, 2010). Based on Gartner’s (2012) interactions with over 85 clients, conflicts between green performance KPIs and red business impact cast doubt on the quality and usefulness of service reports to customers (Ackerman and Maurer, 2012); a green performance indicator means the service level is above the agreed level, while red means it is below the minimum agreed level. These ITSM performance measurement issues constitute a research gap, which was also identified by a comprehensive systematic literature review of ITSM up to April 2012 (Proehl et al., 2013) with reference to Gacenga’s research (Gacenga et al., 2011).

In ITSM, an SLA contains penalties for defined target metrics in service reports, known as Service Level Targets (SLTs). When these service reports are tied to financial penalties, they become critical and valuable because they show the value of IT services to both the business and the service provider (Adams and Govekar, 2013). The SLA defines earn-back and cost for both the IT service provider and the customer, based on defined target metrics that contractually bind the service provider to the quality of service (IEEE Std. 20000-2, 2013). The SLT is part of service reporting, and its quality is the concern of both service provider and customer. The quality of these reports becomes crucial when a metric is not defined on the basis of a simple process or system output and instead requires extensive data gathering, verification and calculation. Some researchers (Paschke and Schnappinger-Gerull, 2006) and ITSM frameworks such as ITIL clearly suggest avoiding these kinds of metrics in SLAs (Hunnebeck et al., 2011), and others have introduced frameworks or methods to define and improve performance monitoring and metric maturity in organizations (Gacenga et al., 2011), but customers like the one in this research case have defined these complex metrics as mandatory in their service requirements and contract.
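To make the mechanics concrete, the following minimal sketch shows how an SLT-based penalty or earn-back might be computed once a metric figure is reported. All numbers, rates and names are hypothetical illustrations, not values from the case SLA.

```python
# Hypothetical illustration of SLT-based penalty and earn-back logic.
# None of these figures come from the research case; they only show the mechanics.

def monthly_penalty(measured_pct: float,
                    slt_pct: float = 98.0,
                    penalty_per_point: float = 50_000.0,
                    earn_back_per_point: float = 10_000.0) -> float:
    """Positive result: penalty owed by the provider.
    Negative result: earn-back credited to the provider."""
    if measured_pct < slt_pct:
        # Each percentage point below target costs the provider a fixed amount.
        return (slt_pct - measured_pct) * penalty_per_point
    # Performance above target earns back part of previously paid penalties.
    return -(measured_pct - slt_pct) * earn_back_per_point

# e.g. a reported 96.5% backup success against a 98% SLT:
print(monthly_penalty(96.5))   # 75000.0 -> penalty
print(monthly_penalty(99.0))   # -10000.0 -> earn-back
```

Because the reported figure directly drives money changing hands, any data quality defect in the metric translates into a financial dispute.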

Service reports based on technical metric definitions require system-generated logs to be produced. Technical metrics are performance metrics in service reporting that are based on the end result of a service and require system-generated event logs from the IT infrastructure for their calculation, such as “percent of backup success”; they are not based on a process, such as “percent of successful changes”. Reports containing such technical metrics are defined here as Technical Service Reports (TSRs). Generating TSRs is more complex and costly because system-generated logs need expert verification, data cleansing, and complex processing to fit all business rules and conditions defined in the SLA. In fact, an expert must exclude testing issues, or any issues that are not genuine under the SLA definition, and exclude log noise that does not reflect real service results. These verifications are costly and time-consuming (Correia and Brito e Abreu, 2010), which pushes report delivery and report monitoring beyond the expected time. However, Operations and the customer need believable reports (a data quality dimension) at the right time. An acceptable report in service reporting shows all customer services with correct status, matches the real implementation and the SLA, and represents the final verified figure of a metric (Paschke and Schnappinger-Gerull, 2006). These difficulties mostly lead to manual cleansing of log and report data, which is time-consuming and results in late report delivery. The process of data cleansing and data quality improvement still has many open problems, is highly domain-dependent, and needs to be explored for each specific context (Müller and Freytag, 2005).
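As a simplified illustration of this verification work, the sketch below derives a “percent of backup success” figure from raw backup events while excluding test runs, the kind of non-genuine entries an expert must filter out under the SLA. The event fields and the exclusion rule are assumptions for illustration only.

```python
# Minimal sketch: deriving "percent of backup success" from system-generated
# backup events. Field names and exclusion rules are hypothetical.

from dataclasses import dataclass

@dataclass
class BackupEvent:
    ci_name: str       # configuration item the backup ran on
    status: str        # "SUCCESS" or "FAILED"
    is_test: bool      # flagged by an expert as a test / non-genuine run

def backup_success_pct(events: list[BackupEvent]) -> float:
    # Expert verification step: drop test runs and other non-genuine events
    # that the SLA says must not count toward the metric.
    genuine = [e for e in events if not e.is_test]
    if not genuine:
        return 100.0  # no in-scope events -> nothing failed
    succeeded = sum(1 for e in genuine if e.status == "SUCCESS")
    return 100.0 * succeeded / len(genuine)

events = [
    BackupEvent("db-server-01", "SUCCESS", False),
    BackupEvent("db-server-01", "FAILED", True),    # test run, excluded
    BackupEvent("file-srv-02", "FAILED", False),
    BackupEvent("file-srv-03", "SUCCESS", False),
]
print(f"{backup_success_pct(events):.1f}%")  # 66.7% of genuine runs succeeded
```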

Moreover, manual report generation with spreadsheet software (such as Microsoft Excel) cannot cope with the complex formal definitions in the SLA and contract, such as service levels that differ by asset type, location, customer or situation. Manual reports are mostly based on simple system-generated events and uptimes in which testing and valid events are mixed (Correia and Brito e Abreu, 2010). All of these challenges lead to manual or semi-manual generation of technical metrics for SLM purposes, at a low quality that invites customer doubt and dispute.
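The kind of formal complexity that defeats a flat spreadsheet can be pictured as a rule lookup in which the applicable service level depends on asset type and location. The rule table below is hypothetical and far simpler than a real contract clause.

```python
# Hypothetical sketch of SLA rules where the target differs by asset type
# and location; a flat spreadsheet formula cannot easily express this.

# (asset_type, location) -> required service level in percent.
# The "*" wildcard stands for "any location".
SLT_RULES = {
    ("database", "datacenter-primary"): 99.5,
    ("database", "*"): 98.0,
    ("workstation", "*"): 95.0,
}

def applicable_slt(asset_type: str, location: str) -> float:
    # Most specific rule wins; fall back to the wildcard, then a default.
    for key in ((asset_type, location), (asset_type, "*")):
        if key in SLT_RULES:
            return SLT_RULES[key]
    return 97.0  # contract-wide default target (hypothetical)

print(applicable_slt("database", "datacenter-primary"))  # 99.5
print(applicable_slt("database", "branch-office"))       # 98.0
```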

In addition to system-generated logs, other Data Sources (DSs) must be used in the report generation process, such as the Asset Management System (AMS) and the Configuration Management Database (CMDB). The AMS is a system on top of the Asset Management (AM) process that manages the activities or processes of tracking and reporting the properties, value, and ownership of assets throughout their lifecycle (Hanna, 2011). The CMDB is a database used to store configuration records and their attributes throughout their lifecycle (Hanna, 2011). The AMS and CMDB contain meta-data required for the reports. Meta-data in this research refers to any complementary data required, besides system-generated logs, to produce Technical Service Reports (TSRs). These meta-data are Configuration Items (CIs), service catalogue information, and some service definitions, rules, and categories. A CI is any component or service asset that must be managed and stored in order to deliver an IT service (Hanna, 2011). Service catalogue information, abbreviated Service Information (SI), is structured information about live IT services (Hanna, 2011). However, the completeness and accuracy of these meta-data and DSs depend on the maturity of their processes and their adoption in the organization. In fact, process design deficiencies such as “incomplete representation” and “ambiguous representation”, as well as operational deficiencies, cause real-world data required for report generation to be missed (Wand and Wang, 1996). Thus, because of the direct dependency between Service Reports (SRs) and these data sources, Data Sources’ Data Quality (DSDQ) issues have a direct impact on Reports’ Data Quality (RDQ) issues and become part of the problem.
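This dependency can be illustrated by the join between raw events and CMDB meta-data: an event whose CI is absent from the CMDB cannot be attributed to a customer service, so a DSDQ gap surfaces directly as an RDQ gap. The records and field names in this sketch are hypothetical.

```python
# Minimal sketch: enriching raw events with CMDB meta-data and surfacing
# completeness gaps. CI records and field names are hypothetical.

cmdb = {  # ci_name -> meta-data needed by the report
    "db-server-01": {"service": "ERP", "customer": "CUSOGT10"},
    "file-srv-02": {"service": "File Sharing", "customer": "CUSOGT10"},
}

raw_events = [
    {"ci_name": "db-server-01", "status": "SUCCESS"},
    {"ci_name": "unknown-host-9", "status": "FAILED"},  # not in the CMDB
]

enriched, missing_ci = [], []
for event in raw_events:
    meta = cmdb.get(event["ci_name"])
    if meta is None:
        # A data-source DQ issue: the report cannot map this event to a service.
        missing_ci.append(event)
    else:
        enriched.append({**event, **meta})

print(len(enriched), "events ready for reporting")
print(len(missing_ci), "events blocked by incomplete CMDB meta-data")
```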

While some researchers have begun to examine Data Quality (DQ) and Information Quality (IQ) (Ahn et al., 2008, Blake and Shankaranarayanan, 2015, Madnick et al., 2009, Pipino et al., 2002, Wand and Wang, 1996, Wang and Strong, 1996), little attention has been paid to the more specific area of data quality in IT service reporting. Many researchers discuss data quality for information systems, EIP, data warehouses and decision systems whose data sources are other information systems or human-generated sources (Madnick et al., 2009). However, little attention has been paid to system-generated data in complex IT services. Thus, it remains to be seen how SLM reports’ data quality in this context can be improved, and the DQ of Technical Service Reports (TSRs) is the gap on which this research focuses. To clarify the differences between the terms DQ, IQ and Report Quality (RQ): the researchers follow Madnick and treat DQ and IQ within the same scope, in contrast to others who define DQ as covering technical issues and IQ as covering non-technical issues (Madnick et al., 2009). RQ in this research is the overall quality of reports, not only of the reports’ data; it concerns mainly representation quality and is considered outside this research’s scope. The researchers also adopt the “fitness-for-use” concept as their quality perspective (Wang and Strong, 1996) and define Reports’ Data Quality (RDQ) as the subset of Report Quality that leads to accurate SLT reporting and customer satisfaction.

During the literature review, some researchers confirmed and explained quality issues in ITSM service reports (Ackerman and Maurer, 2012, Gacenga et al., 2011), and the case organization also faced quality issues in its technical metric reports. In the case organization, backup technical service reports contained around 800,000 entries, which made it impossible to review them fully and manually by the time of submission. Meanwhile, random checks by the customer revealed discrepancies, which led to dispute and customer distrust. Case investigation showed that report quality was low and that the customer’s trust in the service reports had been lost, leading to a dispute with the customer. This dispute could cost the case organization 1.5 million USD in penalties per month. This is presented as the main problem statement of the research: “Technical service reports in IT service management for large enterprises are facing quality issues.”

This research discusses an approach to improving the quality of technical service reporting in Service Level Management for outsourced IT. The purpose of this study is to address the current quality issues, concerns and solutions of service reporting in ITSM by developing and evaluating an ensemble service reporting artifact. Based on the literature and the case investigation, the problem statement and research questions were formulated. The main research question of this research is:

  • How can the quality of technical service reports in the ITSM context be improved?

Based on the main research question and the problem background, four sub-questions are formulated to support the main research question. These questions are:

  • What are the technical service Reports’ Data Quality (RDQ) issues in ITSM service reporting?

  • What process model can be used to improve the technical service reports in large IT service providers?

  • What design principles can be used to implement a service reporting system to improve the technical service reports?

  • How can the artifact for technical service report generation be designed to achieve a data quality improvement loop in large IT service providers?

Based on the research questions, four research objectives under one main objective are defined. The main goal of this research is “to design and implement an ensemble artifact to improve the technical service reports in the outsourced IT services and ITSM context”. In effect, it is to create a data quality improvement loop that corrects data quality issues until both customer and ITSP are satisfied with the reports’ data quality. Accordingly, the four objectives of this study are:

  • To identify quality issues of technical service reporting for large IT service providers in IT service management.

  • To design and implement a process model for improving technical service reports in large IT service providers.

  • To provide design principles to improve the technical service reports in large IT service providers.

  • To achieve an ensemble artifact and implement a system to deliver an improvement loop for technical service reports in large IT service providers.

In reality, reaching perfect quality when none of the stakeholders requires that degree of quality is unnecessary and may not even be possible. Thus, the improvement loop improves quality only to the degree required by the customer. In effect, customer dissatisfaction or data quality issues trigger or intensify the improvement loop, while customer satisfaction slows or stops it.
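Viewed as a feedback control cycle, the loop behaves roughly like the sketch below: low satisfaction triggers corrective action on reported defects, and reaching the satisfaction target stops the loop. This is an assumed illustration of the control logic, not the paper’s IFCR implementation; all scores, thresholds and defects are invented.

```python
# Hypothetical sketch of the reports' data quality improvement loop viewed as
# a feedback control cycle. Scores, thresholds and defects are illustrative.

SATISFACTION_TARGET = 0.95   # loop slows/stops once satisfaction exceeds this

def improvement_loop(defects: set[str], max_cycles: int = 10) -> int:
    """Each cycle publishes a report, gathers feedback on remaining defects,
    and applies corrections until the (simulated) customer is satisfied."""
    cycles = 0
    for cycles in range(1, max_cycles + 1):
        # Feedback step: satisfaction falls as reported defects accumulate.
        satisfaction = 1.0 - 0.1 * len(defects)
        if satisfaction >= SATISFACTION_TARGET:
            break  # satisfaction stops the loop (negative feedback)
        # Correction step: dissatisfaction triggers fixing one root cause,
        # e.g. a cleansing rule, a CMDB record, or a metric formula.
        defects.pop()
    return cycles

remaining = {"test events counted", "missing CIs", "duplicate log entries"}
print("stabilized after", improvement_loop(remaining), "cycles")
```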

This study uses a large Multi-National Company (MNC) IT Service Provider (ITSP) in Malaysia as the case for Action Design Research (ADR). The case company (named ITSPMY in this report to preserve anonymity) provides IT services to more than 20 large companies. This research focused on the ITSPMY department that serves an oil and gas company in the top 10 of the Fortune Global 500 (named CUSOGT10 in this report to preserve anonymity). For this research, observation (A3, A7), interviews (A1, A4), a focus group (A8) and a questionnaire (A10) were used to collect data from the single case organization to implement an ensemble artifact as a new reporting system and processes. An Oracle database and its technologies were used for the implementation, and NVIVO software was used for categorizing and coding the collected data (part of A11).

The success of all service management processes depends on the use of the information provided in service reports (IEEE Std. 20000-2, 2013, p. 37). The ITSPMY service provider experienced Service Level Target (SLT) verification challenges that led to heavy penalties. In addition, without a trustworthy report, management cannot improve operations and Service Quality. Thus, improving report quality, which is linked to Service Quality, prevents penalties and increases customer satisfaction. This research aims to underpin improved reporting and monitoring of service delivery in the operation units of a large IT enterprise.

The research’s Design Principles (DPs) provide principles for designing and implementing ITSM service reporting and repositories in large IT service providers, so as to meet the agreed SLA and prevent financial penalties. The designed artifact also provides Complex Event Processing (CEP) for the operation context, which increases service monitoring quality and thereby Service Quality. In addition, the definition of the class of problems gives other researchers the opportunity to explore this field further. Moreover, the designed Hybrid Normalized Data Warehouse (HNDW) architecture provides a new class of solutions for a broader class of problems, reaching higher data quality and utilization than current Data Warehouse (DWH) or Configuration Management Database (CMDB) designs. Finally, the Integrated Feedback Control Reporting (IFCR) model provides an understanding of the impact of report usage and feedback on report quality in similar contexts, helping organizations achieve a Reports’ Data Quality Improvement (RDQI) loop in their Technical Service Reports (TSRs).

Section snippets

Technical service reports

Sourcing managers, business managers, and IT managers have different demands, perspectives on IT services, and perceptions of what constitutes quality IT services. Each area has a unique perspective that dictates the measures that should be present in a Service Report (SR) or dashboard. Sourcing managers need to ensure that the metrics and dashboards selected do a good job of representing the strategic objectives of outsourcing deals. If the objective is simply to deliver IT services more…

Research methodology

This section defines, describes and justifies the philosophy, design, methodology and methods used in this research. The researchers have tried to follow the well-known guidelines of Sein, Hevner, Walsham and Baskerville, as their philosophical stance is closest to the researchers’ viewpoint and mindset (Baskerville, 1999, Baskerville and Myers, 2015, Sein et al., 2011, Walsham, 2012, Walsham, 1995, Hevner et al., 2004, Hevner and Chatterjee, 2010). This…

Problem formulation

Based on the ADR methodology (Sein et al., 2011), the researchers conducted an initial investigation and problem formulation to address all existing problems in the case and its context. This formulation contains all problems, system specifications, characteristics and requirements. The problems and system specification were refined and reformulated during the build, intervention and evaluation phases in each iterative research cycle.

In brief, this research is conducted based on a real problem (pragmatism…

Build, intervention and evaluation (BIE)

As explained earlier, Build, Intervention and Evaluation (BIE) is the main iterative part of ADR and of this research (S2). During BIE, the artifact is designed, built, implemented, evaluated and improved iteratively until the cycle objectives are reached or an unanticipated problem prevents reaching the cycle goals. The BIE in this research was divided into three main iterations, referred to as cycles. Each cycle followed one main distinct solution and ended as soon as a new solution or method was adopted to reach the objective, or reached a…

Generalized outcome

The last principle and stage of the ADR methodology is the “Generalized Outcome”, which refers to formalizing and generalizing (S4) the research output (Sein et al., 2011). This consists of three levels of generalization: generalization of the problem, generalization of the solution, and production of design principles from the research output (Sein et al., 2011). In this section, the results of all learning, reflection, artifact contribution, the class of problems (O11), the class of solutions (O3, O4, O5, O12, O13) and…

Conclusion, limitations and future work

This Action Design Research answered its Research Questions (RQs) and achieved its Research Objectives (ROs).

In brief, the researchers sought to understand the problems and quality issues of TSRs, which formed RQ1. To answer RQ1, RO1 was defined as “To define quality issues of technical service reporting for large IT service providers in IT Service Management”. Thus, situations, problems, and challenges were formulated into a list of problems. These problems (O2) were linked to Data Quality Dimensions (O5) to show…

References (82)

  • D.P. Ballou et al., 1985. Modeling data and process quality in multi-input, multi-output information systems. Manage. Sci.

  • L. Bardin, 1993. L’analyse de contenu [Content Analysis].

  • R.L. Baskerville, 1999. Investigating information systems with action research. Commun. AIS.

  • R.L. Baskerville et al., 2015. Design ethnography in information systems. Inf. Syst. J.

  • C. Batini et al., 2006. Data Quality – Concepts, Methodologies and Techniques.

  • I. Benbasat et al., 1999. Empirical research in information systems: the practice of relevance. MIS Q.

  • A. Bhattacherjee, 2012. Social Science Research: Principles, Methods, and Practices. Switzerland: Global Text...

  • R. Blake, G. Shankaranarayanan, 2015. Data and Information Quality: Research Themes and Evolving Patterns. Paper...

  • M. Bovee et al., 2003. A conceptual framework and belief-function approach to assessing overall information quality. Int. J. Intell. Syst.

  • G. Britain et al., 2011. ITIL Continual Service Improvement.

  • N. Chandler, 2012. Hype Cycle for Performance Management, 2012. Gartner Technical Professional...

  • I.N. Chengalur-Smith et al., 1999. The impact of data quality information on decision making: an exploratory analysis. IEEE Trans. Knowl. Data Eng.

  • R. Cole, S. Purao, M. Rossi, M.K. Sein, 2005. Being Proactive: Where Action Research Meets Design Research. Paper...

  • A. Correia, F. Brito e Abreu, 2010. Defining and Observing the Compliance of Service Level Agreements: A Model Driven...

  • R. Davison et al., 2004. Principles of canonical action research. Inf. Syst. J.

  • R. Dekkers, 2015. Applied Systems Theory.

  • A.R. Dennis, 2001. Relevance in information systems research. Commun. Assoc. Inf. Syst.

  • J.C. Doyle et al., 1992. Feedback Control Theory.

  • K.M. Eisenhardt, 1989. Building theories from case study research. Acad. Manage. Rev.

  • C.W. Fisher et al., 2003. The impact of experience and time on the use of data quality information in decision making. Inf. Syst. Res.

  • G.G. Gable, 1994. Integrating case study and survey research methods: an example in information systems. Eur. J. Inf. Syst.

  • F. Gacenga, A. Cater-Steel, W.-G. Tan, M. Toleman, 2011. IT service management: towards a contingency theory of...

  • S.D. Galup et al., 2009. An overview of IT service management. Commun. ACM.

  • B.G. Glaser et al., 1967. The Discovery of Grounded Theory: Strategies for Qualitative Research.

  • D.R.W. Gregory. Design science research and the grounded theory method: characteristics, differences, and complementary uses.

  • A. Hanna, 2011. ITIL Glossary and Abbreviations.

  • A. Hevner et al., 2010. Design Science Research in Information Systems.

  • A.R. Hevner, 2007. A three cycle view of design science research. Scand. J. Inf. Syst.

  • A.R. Hevner et al., 2004. Design science in information systems research. MIS Q.

  • H.-F. Hsieh et al., 2005. Three approaches to qualitative content analysis. Qual. Health Res.

  • L. Hunnebeck et al., 2011. ITIL Service Design.