Abstract
In times of the increasing importance of social media services, we have to rethink information literacy. One of the key assumptions of existing information literacy constructs is “static” information, meaning that information does not change. But compared to traditional and mostly unidirectional media services such as printed newspapers or television, this does not reflect the reality of a social media context. Here, information can be characterized as “dynamic”, meaning that, for example, every user can easily modify information before sharing it (again). A measurement construct covering these novel aspects of information literacy has been missing so far. Thus, the main objective of this paper is to develop a rigorous and updated construct to measure and quantify the social media information literacy of an individual social media user. We selected a comprehensive construct development framework that guided us through the investigation and includes qualitative as well as quantitative analyses. The outcome is a theoretically grounded and empirically derived social media information literacy (SMIL) construct. The paper ends with a discussion of potential future research directions.
1 Introduction
In an increasingly digital environment, social media services have become a key channel for individuals to share information and news [1], but they are understood and used heterogeneously [2]. Compared to traditional and mostly unidirectional media services (such as printed newspapers or television), these services change the character of distributed information from static to dynamic. In particular, the concept of user-generated content (UGC) implies that users can easily modify information, thus allowing them to add their own opinions or even change the meaning dynamically [e.g., 3, 4].
Besides the advantages of social media, such as high dissemination speed and network effects that help spread important information among large user groups, disadvantages can be observed as well. One major disadvantage of social media services and the related UGC is that no trusted authority exists that verifies the quality of information distributed through the services’ networks. For example, it is relatively easy to produce misleading or false information, which is often referred to as fake news [5]. Fake news is omnipresent in today’s world and has the potential to cause massive social and monetary damage on every level, i.e., from the individual to the political or societal level [6]. In this context, the recent announcement of the French president Macron to introduce a law banning fake news on the internet during French election campaigns [7] and a similar law that came into effect in Germany at the beginning of 2018 [8] emphasize the relevance of this topic. Another trend of increasing importance regarding UGC is electronic word-of-mouth (eWOM), meaning that (potential) consumers exchange information regarding products or brands in social media environments [9, 10].
Obviously, from the perspective of an individual social media user, these developments require certain competencies in dealing with information [11]. The established term for this is information literacy, which contains, among others, the ability to assess the credibility of information and the reliability of sources [12]. Although a relatively large body of knowledge on information literacy exists, two gaps related to this topic can be identified. First, most definitions and conceptual works on information literacy still consider information as “static”, thereby ignoring its “dynamic” character, which is one key feature of information in the social media context. Second, there is a lack of rigorous measurement construct development. This holds for general information literacy constructs as well as for more specific constructs, i.e., those that consider a certain context such as social media. For instance, metaliteracy [13, 14], an enhanced information literacy concept that aims at covering the dynamic aspects of information in a social media context, is based on conceptual work but does not provide any rigorously developed measurement items. Both gaps hinder academic progress (e.g., in terms of empirical studies), as such progress requires a precise definition and valid measurement items. Thus, the research question of this paper is: What comprises information literacy in the social media context, and how can we measure it?
Our main objective is to answer this question by developing a rigorous measurement construct of what we call social media information literacy (SMIL). We selected the established construct development guideline of MacKenzie et al. [15], which serves as the methodological framework of our study. Starting with the development of a conceptual definition of SMIL and the identification of respective items, several stages are carried out to ensure content validity and general scale validity, applying both qualitative and quantitative methods. Our main contribution is a rigorous construct to measure the information literacy of an individual social media service user. In other words, we develop a way to quantify social media information literacy.
The structure of this paper is as follows. First, the concept of information literacy and its current state of research are briefly outlined. After that, the development of the SMIL construct according to the step-by-step guideline of MacKenzie et al. [15] is presented. The paper ends with a discussion of future research opportunities and applications of the SMIL construct.
2 Information Literacy and Social Media
With the rapid increase and abundance of information published online, research regarding information literacy (IL) has become increasingly important. The term has gained momentum but also reveals limitations in new application areas, such as social media, where UGC changes dynamically. Research endeavors have shown that similar literacy concepts exist with blurring borders between the definitions of terms and their goals. For example, Pinto et al. [16, p. 464] define IL as “the skills, competencies, knowledge and values to access, use and communicate information”. Godwin [17, p. 267] suggests that IL refers to “recognizing appropriate information, collaborating, synthesizing and adapting it wisely and ethically”. These definitions also illustrate that current attempts at defining IL are diverse and aligned to a specific context such as education [18] or work [12]. The term information literacy is often used in combination with media [19, 20], online [21, 22], or computer [23, 24], and the literature also refers to IL skills [25] and IL competencies [26]. Existing IL models are usually described with corresponding tasks (e.g., search information, use information, etc.). This ‘task perspective’ is the predominant approach in studies about information literacy and can be linked to the life cycle model of information management of Krcmar [27]. This model contains five views: managing (1) demand for information, (2) information sources, (3) information resources, (4) supply of information, and (5) application of information; it is often used as the structural basis for related research.
Traditional models of IL mainly ignore specific features and characteristics of social media. One of the few conceptual works on IL in the context of social media is the 7i framework [28, 29], which consists of seven sub-competencies: (1) Information needs; (2) Information sources; (3) Information access and seeking strategy; (4) Information evaluation; (5) Information use; (6) Information presentation; (7) Information process & finding reflection [29]. Although numbered, Stanoevska-Slabeva et al. [29, p. 9] emphasize that there is no strict sequence: “teachers and pupils […] also frequently switched in an interactive manner back and forth among the sub-competences before moving to the next sub-competence in the sequence. For example, if the evaluation of the found information (sub-competence 4) reveals that more information is needed, than the process was rather started from the beginning in order to refine the information needs (sub-competence 1) or went back to refine the information access and retrieval strategy (sub-competence 3).” Although the 7i framework seems to be a comprehensive approach at first glance, one limitation is that it is “only” derived from the literature and not developed according to rigorous construct development procedures.
Another, more holistic approach is the concept of metaliteracy. According to Jacobson and Mackey [13], metaliteracy is envisioned as a “comprehensive model for information literacy to advance critical thinking and reflection in social media, open learning setting and online communities” [13, p. 84] and expands the standard literacy concept. This term unifies related literacy concepts (e.g., media literacy, visual literacy, etc.) and provides a so-called meta-perspective, because current environments are much more social, open, multimodal, and enriched with media combined with collaborative functionalities. Jacobson and Mackey [13] apply their model to teaching, including the following elements: (1) Understand Format Type and Delivery Mode, (2) Evaluate User Feedback as Active Researcher, (3) Create a Context for User-generated Information, (4) Evaluate Dynamic Content Critically, (5) Produce Original Content in Multiple Media Formats, (6) Understand Personal Privacy, Information Ethics and Intellectual Property Issues, and (7) Share Information in Participatory Environments [13, p. 87], which is again in line with the general structure of information management suggested by Krcmar [27]. However, similar to the 7i framework, a rigorous measurement construct for metaliteracy has not been suggested, which constitutes a barrier to empirical investigations in this regard.
3 Construct Development
Based on the roots of construct development [30], the most recent guideline with a focus on IS is the paper of MacKenzie et al. [15]. Compared to other approaches that have been applied recently [31, 32], one key benefit of the guideline of MacKenzie et al. [15] is its comprehensive description of how to develop an appropriate conceptual definition of the focal construct (before starting with content validity checks, which is often the first step in other guidelines). This is very important for our project, as we find that current definitions of information literacy do not reflect the dynamic character of information. Thus, the development of a clear, concise, and updated definition of information literacy is the basis of our study. Furthermore, MacKenzie et al. [15] discuss often underutilized but useful techniques for providing evidence, e.g., regarding content validity. Therefore, we selected the guideline of MacKenzie et al. [15] as our core paper, and its suggested steps serve as the basis for our study.
3.1 Conceptualization (Step 1)
Step 1 refers to a summary of factors authors should consider in the conceptualization phase, based on a literature review of previous theoretical and empirical research as well as a review of literature on related constructs. To get an overview of current measurements of social media skills and competencies in the fast-growing body of social media literature, we conducted such a literature review and applied the guidelines proposed by Webster and Watson [33]. To define the scope at the beginning of the review, an initial explorative search using Google Scholar and other scientific databases (i.e., Scopus, ScienceDirect, ACM, ProQuest, JSTOR and EBSCO) was conducted to identify current approaches of measuring social media skills or competencies and to derive an appropriate search query. After reading and analyzing the initial search results and testing several combinations of keywords on Scopus, we executed a search query with the following string:
information literacy AND social media OR construct* OR measure*
To include the full range of publications, we applied truncation (asterisk character) to the keywords construct and measure and thus considered all variations of these terms such as plural or verb forms. A portfolio of 88 core articles was retrieved to reach the ultimate goal of the process, which is to develop a conceptual definition of the construct. The retrieved papers are part of a very heterogeneous academic field, ranging from educational and bibliographic studies to articles dealing with the workplace. We did not narrow down the scope of literacy initially, but considered articles that refer to aspects such as metaliteracy, transliteracy [34], or reading literacy [35]. An individual analysis of all articles followed for the purpose of identifying the literacy definitions used by the various authors. For 59 publications, we could identify information literacy definitions formulated by the authors themselves or references to such definitions given in other papers. We proceeded with an iterative word-by-word analysis of the definitions and extracted 23 repetitively used major keywords in the first iteration and 18 in the second iteration. We ultimately condensed these keywords into clusters that describe the treatment of information in general or with special regard to social media, characterize the treatment, or address influencing factors. Based on the clustered keywords, we finalized the first step (conceptualization) of the agenda of MacKenzie et al. [15] by giving the definition displayed in Fig. 1.
We followed the objective of a concise definition and, hence, aggregated common terms into clusters, i.e., we combined related terms like select information and retrieve information into the topic of obtaining information. Communication of information in the context of SMIL is understood as a unidirectional, but also a bidirectional exchange of information with the help of social media services. Signature examples would be a tweet and response on Twitter, or a comment and reply below a YouTube video. The literature review suggests that re-evaluation should be perceived as a self-contained cluster that can be differentiated from evaluation by the additional interactive exchange between users and the integration of their feedback.
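To illustrate the kind of word-by-word analysis described above, the following Python sketch tallies keyword frequencies across collected definition texts. It is a minimal sketch under stated assumptions: the definitions list, the stop-word set, and the frequency threshold are illustrative placeholders, not our actual coding scheme, which was carried out manually and iteratively.

```python
from collections import Counter
import re

# Illustrative placeholders: IL definitions extracted from the 59 publications.
definitions = [
    "the skills, competencies, knowledge and values to access, use and communicate information",
    "recognizing appropriate information, collaborating, synthesizing and adapting it wisely and ethically",
    # ... remaining definitions from the literature review
]

# Illustrative stop-word set (the real analysis used human judgment instead).
STOPWORDS = {"the", "and", "to", "it", "of", "a", "in", "or"}

def keyword_frequencies(texts):
    """Tokenize all definitions and count content-word occurrences."""
    counter = Counter()
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counter.update(t for t in tokens if t not in STOPWORDS)
    return counter

# One iteration: keep keywords used repetitively (the threshold is an assumption).
frequencies = keyword_frequencies(definitions)
major_keywords = [word for word, count in frequencies.most_common() if count >= 2]
print(major_keywords)
```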
3.2 Development of Measures (Steps 2 and 3)
The first set of items stems from two sources. First, we screened the literature we had selected for the literature review (see Sect. 3.1) for measurement items. Second, we derived further items from our own SMIL definition, which is in line with the recommendations of MacKenzie et al. [15]. In total, we generated 40 items.
According to the procedure described in the core paper, we then aimed at analyzing the content validity of the items. Most content validity checks are of a qualitative nature, e.g., interviews. However, a limitation lies in the very subjective answers such approaches produce. We therefore applied the more quantitative approach of Hinkin and Tracey [36], in which a rater has to assess the ‘fit’ of each item to the components of a construct; in our case, the eight SMIL components. Given the 40 items, this is a very demanding task for the raters, as it results in 320 decisions (40 items times 8 SMIL components) to be made by one respondent. Similar to Hinkin and Tracey [36], we conducted the survey as part of official lectures in master’s programmes at our affiliation (a business school located in Germany) to ensure enough time for responding. In this way, we were able to collect 79 completed surveys, which is above the recommended benchmark of n = 50 [36] for this type of content validity check.
We then applied one-way repeated measures ANOVA calculations to assess, for each item, whether the ratings for the eight SMIL components are significantly different from each other. For 14 out of the 40 items, we found component ratings that were not significantly different from one another. These cases were discussed among the authors of this paper, and with further department members during a research seminar. Based on this, we decided to rephrase some of the items to improve clarity as suggested by Wieland et al. [37] and, in turn, improve the content validity of our set of items. Table 1 lists the resulting set of 40 items.
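For each item, this check corresponds to a one-way repeated-measures ANOVA over the eight component ratings per rater. The sketch below, assuming a hypothetical long-format table of ratings (column names and values are illustrative, not our data), shows how such a test could be computed with the statsmodels package.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long format for a single item: one row per rater and SMIL component,
# with 'rating' holding the judged fit of the item to that component.
ratings = pd.DataFrame({
    "rater":     [1] * 8 + [2] * 8,
    "component": list(range(1, 9)) * 2,
    "rating":    [5, 2, 1, 2, 1, 1, 2, 1,
                  4, 1, 2, 1, 1, 2, 1, 1],
})

# One-way repeated-measures ANOVA: do the ratings differ significantly
# across the eight SMIL components for this item?
result = AnovaRM(data=ratings, depvar="rating", subject="rater",
                 within=["component"]).fit()
print(result.anova_table)
```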
According to MacKenzie et al. [15], the next, fourth step would be specifying the research model. Because of the limited space of a research-in-progress paper, we decided to integrate the illustration of the model specification (step 4) with the data collection (step 5) and the scale evaluation and refinement (step 6). We thus present the final model at the end of Sect. 3.3, highlighting the initial model of step 4 and the changes made during the scale evaluation.
3.3 Scale Evaluation and Refinement, and Model Specification (Steps 4 to 6)
Decisions about the investigated sample are especially crucial for testing a newly specified model [15]. For consistency, we decided to address a larger number of participants with characteristics similar to the initial target group of master’s students we asked during the item generation step. However, we expanded the target area to a second, UK-based business school to gather a data set of students with even more heterogeneous backgrounds, which allows us to control for cross-cultural differences. This data set of 96 valid responses accounts for approximately one-third of the overall sample for this stage. The remaining two-thirds (n = 186 valid responses) were collected with the help of the Amazon Mechanical Turk (MTurk) crowdsourcing platform (accessible at https://www.mturk.com), making our overall sample more diverse by adding answers from various countries around the world such as the USA and India. After reliability checks, the final sample we could use for the scale evaluation consists of 282 participants with an average age of 34.09 years. This sample size is large enough to perform an exploratory factor analysis (EFA), which is suggested by MacKenzie et al. [15] as a suitable method to evaluate an item scale. The 160 male and 114 female respondents (8 decided not to disclose their gender) needed 8.42 min on average to fill in the distributed questionnaire.
We again asked the participants to rate the 40 items (see Table 1) on the same 5-point Likert scale and transferred the raw data to IBM SPSS for further analyses. The main focus lay on the EFA calculation, which included all 40 item ratings of all 282 responses. This method can be used to associate a number of correlated, measured items with superordinate factors. We initially used eigenvalues of 1 or above as the standard threshold for factor extraction and rotated the factors using the varimax method. This approach, however, revealed some rather imprecise item-factor relations with only a few clearly extracted factors. Following our conceptualization, we expected eight unique factors, but based on the eigenvalues, we received only seven. Nevertheless, relevant indicators such as a good KMO value of .949, a significant Bartlett test, communalities of 0.464 or above even for the weakest items, and ultimately 58.22% of explained variance indicated a reasonable model specification. Therefore, we proceeded with the refinement as referred to in step 6 of the core paper.
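The indicators reported above (eigenvalue-based factor extraction, varimax rotation, KMO, Bartlett test, communalities, and explained variance) can also be computed outside of SPSS, for example with the open-source factor_analyzer package. The following sketch assumes the 282 × 40 ratings are available in a hypothetical file smil_ratings.csv; the file and variable names are placeholders.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

responses = pd.read_csv("smil_ratings.csv")  # hypothetical 282 x 40 rating matrix

# Sampling adequacy and sphericity checks (cf. KMO = .949, significant Bartlett test).
chi2, p_value = calculate_bartlett_sphericity(responses)
_, kmo_overall = calculate_kmo(responses)
print(f"Bartlett: chi2 = {chi2:.2f}, p = {p_value:.4f}; KMO = {kmo_overall:.3f}")

# Determine the number of factors via the eigenvalue >= 1 criterion.
fa = FactorAnalyzer(rotation=None)
fa.fit(responses)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues >= 1).sum())

# Extract and varimax-rotate that number of factors.
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax")
fa.fit(responses)
print("Weakest communality:", fa.get_communalities().min().round(3))
print("Cumulative variance explained:", fa.get_factor_variance()[2][-1].round(4))
```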
First, we fixed the number of factors to be extracted to eight, as operationalized in the model. The variance explained rose to 60.63% accordingly. Second, we iteratively eliminated items with low overall factor loadings, including two items associated with search information (SEA1 and SEA5), COMM4 (communicate information), and REVAL1 (re-evaluate information). Third, we deleted items from the set that revealed high cross-loadings on multiple factors, e.g., CREAT1 and CREAT2 linked to create information. Finally, we decided to eliminate at this point all three items that are supposed to form obtain information, due to unsolvable cross-loadings with items that form other factors. This led to a reduced item set of 19 out of the 40 initial items at this stage of our research; a sketch of such purification rules follows below. Figure 2 contains all 40 original items and their factors as our conceptual SMIL model and, highlighted in different shapes, those eliminated during our EFA calculation.
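The refinement described here can be expressed as simple decision rules over the rotated loading matrix. The sketch below continues the previous example with the extraction fixed to eight factors; the cutoffs of .40 for a weak primary loading and .30 for a problematic secondary loading are common rules of thumb, not thresholds reported in this paper.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Continues the previous sketch: fix the extraction to the eight
# conceptualized factors (step 6 refinement).
fa = FactorAnalyzer(n_factors=8, rotation="varimax")
fa.fit(responses)
loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

def flag_items(loadings, weak=0.40, cross=0.30):
    """Flag items with a weak primary loading or a high secondary loading."""
    flagged = []
    for item, row in loadings.iterrows():
        primary, secondary = np.sort(np.abs(row.to_numpy()))[::-1][:2]
        if primary < weak or secondary > cross:
            flagged.append(item)
    return flagged

print("Candidates for elimination:", flag_items(loadings))
```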
3.4 Discussion of the Developed Model
Our results allow two key interpretations based on the initially derived model. The EFA results correspond with the expected factors to a certain extent, as seven of the eight factors could be identified. However, the results also reveal some cross-loadings, suggesting interrelations of items between factors. As a prominent example, all three items forming obtain information show rather weak loadings overall, and they load on three different factors. Thus, we decided to eliminate the entire factor and its associated items for now. As a next step, however, we plan to reanalyze the cross-loadings in order to integrate these items into potentially rearranged superordinate factors.
The EFA results also allow in-factor interpretations, as individual analyses of the items per factor yield two sub-factors instead of one main factor. Considering the example of evaluate information, items EVAL1 and EVAL4 form one sub-factor, whereas EVAL2, EVAL3, EVAL5 and EVAL6 form another. Content-wise, the first sub-factor can be linked to a higher level of abstraction, i.e., the general evaluation of the information itself and its quality. In contrast, the second sub-factor refers to a more detailed level of evaluation such as credibility, accuracy, or the specific focus on fake news. Similarly, communicate information revealed two distinct sub-factors instead of a single factor. One can be summarized as communication actions referring to information handling (COMM1 & COMM2), e.g., “display” or “share” information, whereas the second sub-factor addresses the communication and interaction with a recipient (COMM3 & COMM6), e.g., providing “feedback” or “constructive criticism”. These findings pave the way for extending the originally postulated model with second-order constructs.
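As an outlook, such a second-order structure could be tested with a confirmatory factor analysis, e.g., using the semopy package and its lavaan-style syntax. The specification below is a hypothetical illustration of that idea for evaluate information, under the assumption of the sub-factor split reported above; it is not a fitted model from this study.

```python
import pandas as pd
from semopy import Model

# Hypothetical second-order specification for 'evaluate information':
# two first-order sub-factors loading on one superordinate factor.
MODEL_DESC = """
EvalGeneral  =~ EVAL1 + EVAL4
EvalDetailed =~ EVAL2 + EVAL3 + EVAL5 + EVAL6
Evaluate     =~ EvalGeneral + EvalDetailed
"""

data = pd.read_csv("smil_ratings.csv")  # hypothetical data set with EVAL* columns
model = Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # parameter estimates, incl. second-order loadings
```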
4 Contributions, Implications and Future Work
First, we provide a construct that addresses a hitherto dispersed topic by bringing rigor to the design and development of a measurement scale for information literacy in social media. Existing approaches dealing with enhanced information literacy concepts such as metaliteracy [13, 14] or the 7i framework [28, 29] do not include a rigorous measurement construct development. Our scale is valid for every individual social media user, both in the private and the business context.
Second, we introduce new ways of mapping our construct to existing theoretical concepts in the field of IS research. A natural link exists to the life cycle model of information management of Krcmar [27]. More specifically, considering the research stream of technology acceptance, experience is often solely linked to the time users spend working and familiarizing themselves with a system, e.g., in TAM2 [38], UTAUT [39] or its successor UTAUT2 [40]. In these models, experience is measured simplistically with a tripartite ordinal scale. Our scale, however, contributes in a new and arguably more suitable way to the measurement of experience in the social media environment: the information literacy of social media users can serve as a novel criterion of experience in addition to time alone.
Initial practical implications can be derived from our construct development as well. Our research can contribute to the decision making of several stakeholders. For example, educational institutions such as universities can use the construct to adjust and optimize the measurement of students’ digital competencies (SMIL) [41]. Additionally, companies can capitalize on the construct in a similar way, as it gives them an instrument to identify potential fields for employee training.
Intended future applications of our SMIL model are primarily related to research on fake news. The construct can be applied to individuals in several ways to assess their competencies in social media services, to inform them about possible threats and impacts, and to create awareness of misleading information in those networks. This is in line with the call of Lazer et al. [6] to empower individuals to deal with fake news. Empowering requires an understanding of the status quo (i.e., the individual SMIL level), and our model paves the way for its future measurement. Upcoming research projects could integrate the SMIL construct into holistic models addressing, amongst others, work-related performance categories [42]. Moreover, the developed scale informs existing roles on different levels (e.g., social media managers) in the business context; related questions could ask for a minimum SMIL level for a specific occupation or role. At the same time, this includes the measurement and assessment of success when teaching social media literacy practices to students [29], e.g., when investigating how curricula respond to the call of Fichman et al. [43] to support students in understanding “how social media works”.
References
Kaplan, A.M., Haenlein, M.: Users of the world, unite! The challenges and opportunities of social media. Bus. Horiz. 53, 59–68 (2010)
Bühler, J., Bick, M.: Name it as you like it? Keeping pace with social media something. Electron. Markets 28, 509–522 (2018)
Baur, A.W., Lipenkova, J., Bühler, J., Bick, M.: A novel design science approach for integrating Chinese user-generated content in non-Chinese market intelligence. In: Proceedings of the 36th International Conference on Information Systems (ICIS), Fort Worth (TX), USA (2015)
Mikalef, P., Sharma, K., Pappas, I.O., Giannakos, M.N.: Online reviews or marketer information? an eye-tracking study on social commerce consumers. In: Kar, A.K., et al. (eds.) I3E 2017. LNCS, vol. 10595, pp. 388–399. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68557-1_34
Allcott, H., Gentzkow, M.: Social media and fake news in the 2016 election. J. Econ. Perspect. 31, 211–236 (2017)
Lazer, D.M.J., et al.: The science of fake news. Science 359, 1094–1096 (2018)
Fouquet, H., Mawad, M.: Macron Plans to Fight Fake News With This Law. https://www.bloomberg.com/news/articles/2018-06-06/macron-fake-news-bill-shows-challenges-of-misinformation-fight
Nicola, S.: How Merkel Is Taking on Facebook and Twitter. https://www.bloomberg.com/news/articles/2018-01-04/how-angela-merkel-is-taking-on-facebook-twitter-quicktake-q-a
Hennig-Thurau, T., Gwinner, K.P., Walsh, G., Gremler, D.D.: Electronic word-of-mouth via consumer-opinion platforms: what motivates consumers to articulate themselves on the Internet? J. Interact. Mark. 18, 38–52 (2004)
Ismagilova, E., Dwivedi, Y.K., Slade, E., Williams, M.D.: Electronic Word of Mouth (eWOM) in the Marketing Context. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-52459-7
Ilomäki, L., Paavola, S., Lakkala, M., Kantosalo, A.: Digital competence – an emergent boundary concept for policy and educational research. Educ. Inf. Technol. 21, 655–679 (2016)
Jinadu, I., Kaur, K.: Information literacy at the workplace: a suggested model for a developing country. Libri 64, 61–74 (2014)
Jacobson, T.E., Mackey, T.P.: Proposing a metaliteracy model to redefine information literacy. Commun. Inf. Lit. 7, 84–91 (2013)
Mackey, T.P., Jacobson, T.E.: Reframing information literacy as a metaliteracy. Coll. Res. Libr. 72, 62–78 (2011)
MacKenzie, S.B., Podsakoff, P.M., Podsakoff, N.P.: Construct measurement and validation procedures in MIS and behavioral research: integrating new and existing techniques. MIS Quart. 35, 293–334 (2011)
Pinto, M., Doucet, A.-V., Fernández-Ramos, A.: Measuring students’ information skills through concept mapping. J. Inf. Sci. 36, 464–480 (2010)
Godwin, P.: Information literacy and Web 2.0. Is it just hype? Program 43, 264–274 (2009)
Lau, W.W.F., Yuen, A.H.K.: Developing and validating of a perceived ICT literacy scale for junior secondary school students: pedagogical and educational contributions. Comput. Educ. 78, 1–9 (2014)
Somabut, A., Chaijaroen, S., Tuamsuk, K.: Media and information literacy of the students who learn with a digital learning environment based on constructivist theory. In: Proceedings of the 24th International Conference on Computers in Education (ICCE), India (2016)
Austin, E.W., Muldrow, A., Austin, B.W.: Examining how media literacy and personality factors predict skepticism toward alcohol advertising. J. Health Commun. 21, 600–609 (2016)
Allen, M.: Promoting critical thinking skills in online information literacy instruction using a constructivist approach. Coll. Undergrad. Libr. 15, 21–38 (2008)
Peterson-Clark, G., Aslani, P., Williams, K.A.: Pharmacists’ online information literacy: an assessment of their use of Internet-based medicines information. Health Inf. Libr. J. 27, 208–216 (2010)
Punter, R.A., Meelissen, M.R.M., Glas, C.A.W.: Gender differences in computer and information literacy: an exploration of the performances of girls and boys in ICILS 2013. Eur. Educ. Res. J. 16, 762–780 (2017)
Scherer, R., Rohatgi, A., Hatlevik, O.E.: Students’ profiles of ICT use: identification, determinants, and relations to achievement in a computer and information literacy test. Comput. Hum. Behav. 70, 486–499 (2017)
Al-Aufi, A.S., Al-Azri, H.M., Al-Hadi, N.A.: Perceptions of information literacy skills among undergraduate students in the social media environment. Int. Inf. Libr. Rev. 49, 163–175 (2017)
Pinto, M., Fernández-Pascual, R.: Information literacy competencies among social sciences undergraduates: a case study using structural equation model. In: Kurbanoğlu, S., Špiranec, S., Grassian, E., Mizrachi, D., Catts, R. (eds.) ECIL 2014. CCIS, vol. 492, pp. 370–378. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-14136-7_39
Krcmar, H.: InformationsManagement. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-45863-1
Müller, S., Scheffler, N., Seufert, S., Stanoevska-Slabeva, K.: The 7i framework - towards a measurement model of information literacy. In: Proceedings of the 21st Americas Conference on Information Systems (AMCIS), Puerto Rico (2015)
Stanoevska-Slabeva, K., Müller, S., Seufert, S., Scheffler, N.: Towards modeling and measuring information literacy in secondary education. In: Proceedings of the 36th International Conference on Information Systems (ICIS), Fort Worth (TX), USA (2015)
Churchill, G.A.: A paradigm for developing better measures of marketing constructs. J. Mark. Res. 16, 64–73 (1979)
Lewis, B.R., Templeton, G.F., Byrd, T.A.: A methodology for construct development in MIS research. Eur. J. Inf. Syst. 14, 388–400 (2005)
Schmiedel, T., Vom Brocke, J., Recker, J.: Development and validation of an instrument to measure organizational cultures’ support of business process management. Inf. Manag. 51, 43–56 (2014)
Webster, J., Watson, R.T.: Analyzing the past to prepare for the future: writing a literature review. MIS Quart. 26, xiii–xxiii (2002)
Brage, C., Lantz, A.: A re-conceptualisation of information literacy in accordance with new social media contexts. In: The 7th International Multi-Conference on Society, Cybernetics and Informatics (IMSCI), pp. 217–222. Orlando, FL, USA (2013)
Fahser-Herro, D., Steinkuehler, C.: Web 2.0 literacy and secondary teacher education. J. Comput. Teach. Educ. 26, 55–62 (2010)
Hinkin, T.R., Tracey, J.B.: An analysis of variance approach to content validation. Organ. Res. Methods 2, 175–186 (1999)
Wieland, A., Durach, C.F., Kembro, J., Treiblmaier, H.: Statistical and judgmental criteria for scale purification. Supply Chain Manag.: Int. J. 22, 321–328 (2017)
Venkatesh, V., Davis, F.D.: A theoretical extension of the technology acceptance model: four longitudinal field studies. Manag. Sci. 46, 186–204 (2000)
Venkatesh, V., Morris, M.G., Davis, G.B., Davis, F.D.: User acceptance of information technology: toward a unified view. MIS Quart. 27, 425–478 (2003)
Venkatesh, V., Thong, J.Y., Xu, X.: Consumer acceptance and use of information technology: extending the unified theory of acceptance and use of technology. MIS Quart. 36, 157–178 (2012)
Murawski, M., Bick, M.: Demanded and imparted big data competences: towards an integrative analysis. In: Proceedings of the 25th European Conference on Information Systems (ECIS), pp. 1375–1390, Guimarães, Portugal (2017)
Alessandri, G., Borgogni, L., Truxillo, D.M.: Tracking job performance trajectories over time: a six-year longitudinal study. Eur. J. Work Organ. Psychol. 24, 560–577 (2015)
Fichman, R.G., Santos, B.L., Zheng, Z.: Digital innovation as a fundamental and powerful concept in the information systems curriculum. MIS Quart. 38, 329–353 (2014)