E-Government readiness index: A methodology and analysis
Highlights
- We focus on measurement and analysis of e-Government readiness and develop a set of e-Government readiness indices.
- We build upon the United Nations' current index and develop new indices based on principal component analysis (PCA).
- We provide guidelines and statistical justifications for index composition, weighting scheme, and application.
- Our study has implications for policy making and contributes to other similar index construction exercises.
Introduction
The key objective of our paper is to address the scope and constitution of e-Government readiness indices using methodological tools. More specifically, we methodically examine the e-Government readiness index maintained by the United Nations Department of Economic and Social Affairs (UNDESA) to address two questions: which specific elements should be included in indices that measure and track e-Government readiness, and how should those constitutive elements influence the indices? Using complementary methodological approaches, we address the issues of scope and constitution by developing alternative indices. We compare and contrast the alternative indices developed in this paper with each other, and also with the existing index published through the United Nations Public Administration Network (UNPAN).
The use of information technology to govern has increasingly been the focus of several nations in the last few decades. Broadly termed electronic governance or e-Governance, it is an overarching concept that represents the use of information and communication technologies (ICTs) for delivering services to citizens as well as for orchestrating intra-government functions (Sheridan & Riley, 2006). Governments and policy making entities across nations have undertaken initiatives to formalize and exploit the role of ICTs in enabling and delivering governance functions. Concurrent with the policy initiatives, academic interest in the topic has grown and research has explored various constitutive elements of e-Governance, and the nature of adoption and impacts of e-Governance projects (Davison et al., 2005, Grant and Chau, 2005, Grönlund and Horan, 2004).
In this paper we focus on e-Government readiness, a particular area of policy making and research within the broader umbrella of e-Governance initiatives. e-Government readiness primarily assesses the extent to which governments or economies are equipped to deliver various governmental services online and to exploit ICT for the internal functioning of government (Al-Omari and Al-Omari, 2006, UNDESA (United Nations Department of Economic and Social Affairs), 2008). Multiple initiatives, undertaken by international organizations, consulting firms, and academic investigators, have sought to measure and operationalize the various aspects of readiness. While these multiple measurement projects have enriched the debate, they have also given rise to questions of convergence and consistency. Critics have argued that individual studies, motivated and initiated by specific stakeholders, focus on divergent sets of factors and are tied to specific contexts (Ojo, Janowski, & Estevez, 2007). For example, some studies focus on specific regions such as Europe (Bannister, 2007, Ojo et al., 2007), and others have progressively modified scale items over the years (Ojo et al., 2007), making comparisons of indices across time and geography difficult. Hence, as research on measuring e-Government readiness has gained recognition, the concept itself has become somewhat fragmented, and critiques have called for a more focused understanding of e-Government readiness (Bannister, 2007, Ojo et al., 2007).
The call for a more universal and consistent view of e-Government readiness alerts us to three key issues that characterize existing efforts. First, what should ideally constitute e-Government readiness in terms of the scope of assessment and the selection of indicators? Across studies, the nature and number of variables used as measurement items have differed substantially, reducing the scope for cross-comparison and for developing a focused understanding of e-Government readiness. Second, how should individual variables contribute toward the overall indices, and what is the justification for their relative contributions? Evidently, not all facets of technology can be considered equally important, and the basis for weighting the contribution of individual elements toward the overall indices has varied significantly across studies. Finally, from an implementation viewpoint, for which users and purposes are the indices suitable, given that they have often been context specific? The specific settings and assumptions used in constructing the indices may require matching the indices with appropriate purposes.
The critical gaps outlined above need both conceptual and empirical examination. From a strictly conceptual point of view, one can attempt to redefine and streamline the concept and constitutive measures of e-Government readiness; through rigorous empirical examination, one can also shed light on the existing indices and provide critical insights into their scope, constitution, and applicability. We choose to focus on the empirical route for several reasons. First, while significant resources have been expended to gather data on e-Government readiness, there has been a singular lack of rigorous empirical examination of the acquired data (Ojo et al., 2007). The indices themselves have been based on straightforward, rudimentary combinations of constitutive items, and no systematic evaluation of the items, their inter-relations, or their contributions toward the overall indices has taken place. Additionally, little work has been done to validate the scales through inter-index relationships, as predictive instruments, or as covariates of other critical variables. Second, while re-conceptualizing, commissioning, and executing a new index study would require significant investment, an empirical examination of existing index data incurs minimal additional cost. Finally, through an inductive approach, the results of an empirical examination can help us develop alternative theoretical perspectives to better inform future index conceptualizations.
We propose to address the issues of scope and constitution of indices by examining the internal structure of the data, and by exploiting the variances and covariances of the items. We specifically build upon the UNPAN data and e-Government readiness index, and employ principal components analysis (PCA) to develop our indices. Our choice of methodology allows us to develop an empirical justification for including or excluding specific variables, and to create indices based on the relative contributions of specific variables toward the constitution of the index. Using specific assumptions and two different approaches, we develop four different indices and examine the resulting rankings of the nations vis-à-vis the ranking based on the UNPAN index.
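The index-construction logic described above can be sketched in a few lines. Everything here is hypothetical (the country scores, the three indicator columns, and the equal-weight "benchmark" standing in for a published index are all invented for illustration, not the UNPAN figures): standardized indicators are weighted by first-principal-component loadings, and the resulting ranking is compared with the benchmark ranking via Spearman's rank correlation.

```python
import numpy as np

# Hypothetical standardized scores for 5 countries on 3 readiness indicators
# (columns might represent, e.g., web measure, infrastructure, human capital).
Z = np.array([
    [ 1.2,  0.9,  1.1],
    [ 0.3, -0.2,  0.5],
    [-0.5,  0.1, -0.4],
    [-1.4, -1.1, -1.3],
    [ 0.4,  0.3,  0.1],
])

# First-principal-component loadings act as empirically derived weights.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = eigvecs[:, np.argmax(eigvals)]
w = w if w.sum() > 0 else -w        # resolve the arbitrary eigenvector sign
w = w / w.sum()                     # normalize weights to sum to 1

pca_index = Z @ w                   # PCA-weighted composite index

# A fixed equal-weight index, standing in for the published benchmark.
benchmark_index = Z.mean(axis=1)

def ranks(v):
    """Rank 1 = highest index value."""
    r = np.empty_like(v)
    r[np.argsort(-v)] = np.arange(1, len(v) + 1)
    return r

# Spearman's rank correlation between the two country rankings.
d = ranks(pca_index) - ranks(benchmark_index)
n = len(d)
rho = 1 - 6 * np.sum(d**2) / (n * (n**2 - 1))
```

A rho near 1 would indicate that the PCA-based weighting leaves the country ordering largely intact; divergence flags countries whose rank is sensitive to the weighting scheme.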
The rest of the paper is organized as follows: the next section reviews the relevant literature on e-Government readiness measures; the following section describes the existing method for e-Government readiness index used by UNPAN; the subsequent sections present our methodology, analysis, and results; and the final section discusses theoretical and policy implications and limitations, and future research directions.
Benchmarks for e-Government and e-Government readiness
E-Governance represents a paradigm wherein governments across economies strive to use ICTs, specifically the Internet, to deliver services to citizens and link intra-governmental functions. The enormous gains in process efficiencies and waste elimination that the private sector has enjoyed as a result of using ICTs have been viewed very favorably by policymakers who have intended to use technology for similar gains. Apart from motivations to increase efficiencies of government functions, it has
Proposed methodology
The PCA technique has traditionally been used by practitioners to transform a large set of correlated variables into a smaller set of uncorrelated variables, called the principal components, which account for most of the variation in the original set. Since the principal components are linear combinations of the original variables, with the eigenvectors (characteristic vectors) of the covariance (or correlation) matrix supplying the weights, it can be argued that PCA resolves
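The eigen-decomposition underlying PCA can be illustrated with a minimal sketch (the data matrix is a toy example, not readiness data): standardize the variables, decompose the correlation matrix, and read off the variance explained by each component.

```python
import numpy as np

# Toy data: 6 observations (e.g., countries) x 3 correlated indicator variables.
X = np.array([
    [0.8, 0.7, 0.9],
    [0.5, 0.4, 0.6],
    [0.9, 0.8, 0.8],
    [0.3, 0.2, 0.4],
    [0.6, 0.6, 0.7],
    [0.4, 0.3, 0.3],
])

# Standardize so PCA operates on the correlation matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Eigen-decomposition of the correlation matrix.
R = np.corrcoef(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)

# Sort components by explained variance, descending.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Proportion of total variance captured by each principal component.
explained = eigvals / eigvals.sum()

# Scores on the first principal component: a candidate one-dimensional index.
pc1_scores = Z @ eigvecs[:, 0]
```

With strongly correlated indicators, the first component typically captures well over a third of the total variance, which is what makes a single composite index defensible.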
Discussion and implications
The results obtained in the previous section provide critical insights regarding both the scope and constitution of the e-GRI. We first discuss the issue of constitution, highlighting the key findings related to how components of the index contribute toward the overall index. We must emphasize here that we treat the coefficient values of variables within given principal components as their weights, that is, their relative contributions toward index composition.
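This weighting interpretation can be made concrete with a small sketch (the loading values are hypothetical, not the paper's actual estimates): the coefficients of a principal component are rescaled so that each variable's share of the composite index is explicit.

```python
import numpy as np

# Hypothetical first-principal-component loadings for three index components
# (e.g., web measure, infrastructure, human capital; values invented).
loadings = np.array([0.62, 0.55, 0.56])

# Rescale so the weights sum to 1, giving each variable's relative
# contribution toward the composite index.
weights = loadings / loadings.sum()
contribution_pct = 100 * weights
```

Reporting contributions this way makes the weighting scheme auditable: a reader can see directly which component dominates the index and by how much.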
References (32)
- E-Government leadership: Building the trust (2006).
- E-Government readiness assessment model. Journal of Computer Science (2006).
- The curse of the benchmark: An assessment of the validity and value of e-government comparisons. International Review of Administrative Sciences (2007).
- E-readiness assessment publication: e-Readiness overview (2005).
- Online availability of public services: Web-based survey on electronic public services.
- Online availability of public services: How is Europe progressing? Web-based survey on electronic public services.
- From government to e-Government: A transition model. Information Technology & People (2005).
- Developing a generic framework for e-Government. Journal of Global Information Management (2005).
- Introducing e-Gov: History, definitions, and issues. Communications of the Association for Information Systems (2004).
- Multivariate data analysis (2006).
- Assessing e-Government progress — why and what?
- If you measure it, they will score: An assessment of international e-Government benchmarking. Information Polity.
- Discarding variables in a principal component analysis I: Artificial data. Applied Statistics.
- Discarding variables in a principal component analysis II: Real data. Applied Statistics.
- Evaluating the progress of e-Government. Information Polity.
- Multivariate statistical methods.
Anteneh Ayanso is an Associate Professor of Information Systems at Brock University at St. Catharines, Canada. He received his Ph.D. in Information Systems from the University of Connecticut and an MBA from Syracuse University. His research interests are in data management, electronic business, quantitative modeling and simulation in information systems and supply chains. He has published in journals such as Communications of the AIS, Decision Support Systems, European Journal of Operational Research, Journal of Database Management, International Journal of Electronic Commerce, Information Technology for Development, International Journal of Healthcare Delivery Reform Initiatives, as well as in proceedings of major international conferences in information systems and related fields. In addition, he has contributed chapters to several books. His research in Data Management has been funded by the Natural Sciences and Engineering Research Council of Canada (NSERC).
Dipanjan Chatterjee is an Assistant Professor of Information Systems at Brock University at St. Catharines, Canada. He received his Ph.D. from Rensselaer Polytechnic Institute (RPI). His research investigates how IT impacts the existing structures and processes of inter-organizational relationships and explores the tactical and strategic choices that firms need to make while deciding on their business-to-business information systems. His research is published in IEEE Transactions on Engineering Management, Information Systems and e-Business Management, and major international conferences in information systems and related fields.
Danny Cho is a Professor of Information Systems and Operations Management and Associate Dean of Research and Graduate Programs at the Faculty of Business, Brock University. He holds a B.A.Sc. and a M.Eng. in Industrial Engineering from the University of Toronto, and a Ph.D. in Management Science and Information Systems from McMaster University. He has published and has forthcoming papers in journals such as European Journal of Operational Research, Empirical Economics, Optimal Control Applications & Methods, International Journal of Systems Science, Journal of the Operational Research Society, Journal of Supply Chain Management, Journal of Enterprise Information Management, Social Indicators Research, among others. His research areas include supply chain network design, supplier selection and purchasing decision, purchasing managers' index, maintenance and reliability, quantitative modeling in information systems, business economics, and quality management. His research has been funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Social Sciences and Humanities Research Council (SSHRC) of Canada.