Abstract
Purpose
Cyberstalking is a growing threat to society, and policymakers should address it utilizing the input of constituents. For this, two key components are required: actionable objectives informed by the values of society and the means of implementation to maximize their potential benefits. The process should be guided by constituents' values, which requires eliciting intrinsic values as individual preferences that can then be extrapolated to society at large.
Design/methodology/approach
The authors utilize Keeney's (1990) public value forum and Sen's (1999) social choice theory to elicit and convert these intrinsic values to serve as the basis for developing public policy to prevent cyberstalking.
Findings
The results demonstrate a strong desire by participants to have clear regulations, policies and procedures developed in concert with industry and enforced by the government that elucidate required protections against cyberstalking in combination with strong technical controls. These policies should guide technical control development and implementation, but leave ultimate control in the hands of technology users to decide what controls they want to utilize.
Originality/value
This study is the first to utilize Keeney's (1988) public value forum in the context of cyberstalking to develop quantitative measures regarding technology users' desired cybersecurity protections against cyberstalking. The authors provide a decision-making framework for policymakers to develop a new policy based on the input of their constituents in a manner that maximizes their potential utility and ultimate benefit.
Citation
Smith, K. and Dhillon, G. (2023), "Re-framing the policy conversation for preventing cyberstalking", Internet Research, Vol. 33 No. 1, pp. 84-123. https://doi.org/10.1108/INTR-03-2021-0169
Publisher
Emerald Publishing Limited
Copyright © 2022, Emerald Publishing Limited
Introduction
Cyberstalking has emerged as a growing threat to society as a whole, particularly with the proliferation of the Internet and social media. As a threat, cyberstalking enables perpetrators to use electronic devices to stalk persons in cyberspace, invading personal privacy and threatening individuals with the intent to cause fear and panic (Dhillon and Smith, 2019). While academic fields such as sociology, criminology and psychology may provide slightly differing definitions of cyberstalking, this paper is focused on public policy and, thus, we use the legal definition set forth by the United States government, which considers cyberstalking to be “the use of the internet, email, or other electronic communications devices to stalk another person” (United States Attorney General, 1999). In 2020, the number of complaints of cyberstalking received by the FBI's Internet Crime Complaint Center rose 69%, to nearly 800,000 (Green, 2021). Further, in a stalking victimization report for the United States Department of Justice, Truman and Morgan (2021) note that over 1.3 million people reported being cyberstalked annually. Compared to the roughly 300,000 individuals who reported being stalked in the traditional sense, this represents a far higher frequency. For these reasons, cyberstalking has been deemed a danger to society, and both governments and institutions must take steps to prevent this harmful behavior through the implementation of effective public policy.
While the threat of cyberstalking has generated a large degree of interest within the academic and practitioner communities, both the necessary legislation and policy are lacking (Castaños, 2016; Chung, 2017; DeMatteo et al., 2017; Dhillon and Smith, 2019; Knight, 2014; Leahy, 2017; Marshak, 2017). One reason given for this lack of comprehensive policy is that most federal policy leaves decisions regarding cyberstalking to the states (Leahy, 2017; Marshak, 2017). While current legislation, such as the Violence Against Women Act of 1994 and 47 U.S.C. § 223 of the United States federal anti-cyberstalking law, exists as a legal mechanism for addressing cyberstalking, these laws leave many issues related to the problem of cyberstalking unanswered, and the remaining legislative effort falls to the states without any clear federal policy direction to guide the application of such state-level legislation. Further, United States federal law often used to attempt to prosecute cyberstalkers, such as 18 U.S.C. § 875(c), makes it a federal crime, punishable by up to five years in prison and a fine of up to $250,000, to transmit any communication in interstate or foreign commerce containing a threat to injure the person of another. Section 875(c) applies to any communication actually transmitted in interstate or foreign commerce, including threats transmitted via the telephone, e-mail, beepers, or the internet. However, absent a communicable threat, this law does not apply and cannot be used to prosecute a crime such as cyberstalking.
As previously noted, in many states little protection exists for victims and few means exist for prosecuting cyberstalking offenders (Chung, 2017; Leahy, 2017). A number of reasons are noted, such as the difficulty in pinpointing the location of the cyberstalker as well as the difficulty of identifying said cyberstalker due to the anonymity the Internet provides (Leahy, 2017). Worse yet, Chung (2017) states that cyberstalking often acts as a potential precursor to violent crimes and, as such, a pressing need exists for new, specific cyberstalking policy and legislation. However, current solutions tend to rely on the contortion of traditional stalking legislation and policy. Therefore, governments and institutions attempting to solve problems such as cyberstalking are making important decisions regarding the specific outcomes of their policy efforts. These decision-makers should attempt to maximize the social value of their efforts and reach a successful outcome by making decisions in a socially conscious manner and maximizing input from key stakeholders (Dhillon and Smith, 2019; Tversky and Kahneman, 1986; Sen, 1999). We argue that such a goal is accomplished by having key affected stakeholders review potential solutions as well as the means of implementing them. Doing so builds consensus amongst the general public affected by these policy- and legislation-making efforts (Keeney et al., 1990; Keeney, 1996).
To accomplish this task, we first require a set of actionable objectives. Dhillon and Smith (2019) present five fundamental objectives and their defining attributes for preventing the problem of cyberstalking. While actionable objectives are a necessary starting point for enabling the creation of effective public policy, it is also important to define a means of enacting them in order to ensure successful implementation. Doing so provides two important benefits for our current work. First, it will allow governments and institutions to target finite resources at the most important fundamental objectives, maximizing their positive impact on society. Second, it will facilitate the successful implementation of cyberstalking policy by reducing societal resistance, as those most affected are involved in the policy creation process (Herley, 2009). As governments and institutions possess only limited resources and a finite amount of time and energy, understanding the best means of prioritizing and implementing these objectives allows them to maximize their benefits.
Therefore, our research uses the five fundamental objectives derived by Dhillon and Smith (2019) as the basis for exploring how best to implement these actionable objectives for the successful prevention of cyberstalking through the use of effective public policy. We further incorporate Keeney's (1988) public value forum, informed by social choice theory (Sen, 1977, 1999), to develop theoretically grounded implementation scenarios. These scenarios are then used to rank and weight each objective to measure their perceived impact in the form of maximized utility. By bounding the interpretation of decisions by the stakeholders in this research within a socio-normative theoretical framework, we place the decision-making process of individuals and groups developing public policy solely in the context of cyberstalking prevention at a societal level. Additionally, the use of social choice theory provides a normative interpretive mechanism which specifically deals with the aggregation of individual choices to make group-based decisions as a result of cohesive societal values. Hence, it can be used to elucidate understandings of group preference related to cyberstalking prevention. Through this process, we provide a normative contribution to the academic literature, illuminating a process by which complex social issues can be addressed and serve as the basis for the institutional decision-making process when developing public policy.
Literature review of cyberstalking policy
Within the academic literature there are three distinct groupings of research which inform this work. First, research that relates to the nature and process of public policy formulation, specifically the value driven method utilized in this work. Second, the literature that details traditional stalking and cyberstalking legislation and the lack of relevance for using co-opted stalking legislation to solve the problem. Third, research dealing explicitly with cyberstalking policy is explicated and calls for additional research noted. In combination, these three distinct bodies of literature form the basis for establishing the need for research directed at furthering the development of cyberstalking-specific legislation and policy.
Public values in policy-making
The first body of research that informs our work deals with how to incorporate public values into the policy-making decision process. This body of work demonstrates that the practice has long been accepted in academia in situations where the public's opinion is intended to drive policy creation and implementation (Dhillon et al., 2016; Keeney and Palley, 2013; May et al., 2013; Smith et al., 2018, 2021a, b; Witesman and Walters, 2014). The reason is that public opinion is driven by the inherent values of the collective individuals and can be useful in creating policy that is not only effective but also accepted by those affected by its implementation (Keeney, 1996; Dhillon et al., 2016; Dhillon and Smith, 2019; Smith et al., 2018, 2021a, b). Public values are therefore an important consideration in policy decisions and should be incorporated into the decision-making process, even though doing so is a difficult task (Dhillon et al., 2016; Dhillon and Torkzadeh, 2006; Smith et al., 2018, 2021a, b). While cyberstalking is a relatively new concept, a great deal of research has been conducted with respect to the characteristics of cyberstalkers (Cupach and Spitzberg, 2001; Kaur et al., 2021; Spitzberg et al., 1998; Spitzberg and Rhea, 1999; Todd et al., 2021) as well as the legal elements that must be evaluated (DeMatteo et al., 2017; Goodno, 2007; Hazelwood and Koon-Magnin, 2013). Yet, little work has been done to date to elicit public values regarding this phenomenon to inform policy decisions that aim to prevent its occurrence. For example, Dhillon and Smith (2019) used Keeney's (1996) value-focused thinking methodology to elicit public values regarding cyberstalking prevention, providing a normatively derived set of objectives that could be examined and applied to the policy-making process.
This is an important contribution to the cyberstalking literature as it examines the individual perspective and extends it into the realm of public policy development, ensuring key stakeholders' input is considered. Likewise, Smith et al. (2021a) used a similar process to develop public policy around the Internet of Things (IoT) and user privacy and security.
While these public values are important and can aid in making policy decisions, it is still unclear how policymakers should interpret public values in a specific policy context (Dhillon et al., 2016; Dhillon and Torkzadeh, 2006; Coss et al., 2019; Dhillon and Smith, 2019; Smith et al., 2021a, b; Keeney et al., 1990; Keeney, 1996). According to Keeney (1988, 1996) this includes things such as how public values should be operationalized, what role the experts and their values should have, and how expert recommendations and value interpretations should be combined in the decision-making process for policy creation. It should be noted that as policy issues become more complex and the policy context increases in scope, the problem domain likewise sees an increase in complexity (Dhillon et al., 2016; Keeney, 1996). However, several approaches exist that can shed light and help to clarify public values in complex policy problems such as surveys, indirect and direct value elicitation, focus groups and public involvement (Keeney et al., 1990, 2013). One important mechanism, the public value forum, serves to explain the role of each objective derived from public values, guiding the policy-making process on how to operationalize them in a given context and also facilitates the examination of the role(s) experts should play in guiding decision-making processes. In his seminal paper, Keeney (1990) used the public value forum to elicit the public values of German citizens for setting long-term energy policies for their country. This process consisted of combining the informal “layperson” assessments of the German public with what was termed “factual” expert assessments. It was concluded that the process was long, difficult and expensive, yet offered tremendous insights into solving the problem.
Prior cyberstalking legislation and policy research
The second body of academic literature that informs this work, based in the US, has increasingly focused on the need to update policy and legislation in light of the new threats posed by cyberstalking. Work in this area tends to exist either as a review or critique of existing legislation or policy (Chung, 2017; DeMatteo et al., 2017; Kaur et al., 2021; Knight, 2014; Leahy, 2017; Marshak, 2017; Todd et al., 2021) or as a call to action (Castaños, 2016; Chung, 2017; DeMatteo et al., 2017) in light of the threats posed to society by cyberstalking. While this research is important and seeks to address the complex issue of cyberstalking, few tangible solutions are proffered with respect to actually preventing it.
For example, Chung (2017) and Leahy (2017) review the legislation and policy regarding cyberstalking in the states of Maryland and Illinois, respectively. In Illinois, Leahy (2017) finds that the state provides a strong legal doctrine for prosecuting crimes of traditional stalking and has taken action to address cyberstalking as well, yet in many states little protection for victims and few means of prosecuting cyberstalking offenders exist. A number of reasons for this lack of protection and ability to prosecute are noted, such as the difficulty in pinpointing the location of the cyberstalker as well as the difficulty of identifying said cyberstalker due to the anonymity the Internet provides (Leahy, 2017). For Illinois, Leahy (2017) states that the state's ability to deal more adequately with this problem stems from proper definition of and legislation pertaining to cyberstalking. Yet, the legal definition of cyberstalking classifies it as a civil rather than a criminal offense. Additionally, the interpretation of legal language can be contentious, and newer forms of online interaction such as social media have yet to be incorporated. In contrast to Leahy (2017), Chung (2017) finds the legislation, as well as the views regarding the handling of cyberstalking, to be antiquated and lacking in protections and remedies for the victims of this crime. Chung (2017) states that cyberstalking often acts as a potential precursor to violent crimes and, as such, a pressing need exists for the state of Maryland to deal with the new cyber component of stalking specifically, rather than relying on the contortion of traditional stalking legislation and policy.
A study by DeMatteo et al. (2017) found numerous areas of disagreement between public perception and statutory case law exist. For example, they found that public preference is that cyberstalking be treated as a separate offense from stalking, that a threat of violence is not required for behavior to constitute cyberstalking, and that there should be a private civil cause of action for cyberstalking. This is important as the results clearly demonstrate that the public does not consider current policy and legislation sufficient, showing a clear disconnect between public perceptions of cyberstalking and the existing public policy being utilized to solve it.
Calls for additional cyberstalking research
Lastly, additional research focusing on cyberstalking legislation and policy in general makes similar calls regarding the need to address this problem with new solutions rather than adapting old ones. For example, Castaños (2016) notes that current legislation and policy is deficient, yet offers no tangible solutions to alleviate these deficiencies, whereas Knight (2014) proposes improvements based on the Model Stalking Code guidelines, and Marshak (2017) advocates for training, education and jurisdictional revisions to solve the problem. Yet, of note, not a single proposed solution identifies the needs of potential victims from those actually affected. While Marshak (2017) and Castaños (2016) review real cases of cyberstalking to infer possible remedies, these fall short of proposing policy-level solutions and do not demonstrate a means of implementing the solutions they propose. Doing so would require understanding the problem from the perspective of those who will be affected and identifying what they find important and how they wish to be protected. In their paper, Dhillon and Smith (2019) rectify this issue by eliciting latent societal norms, in the form of values regarding cyberstalking prevention, from concerned individuals. While this work presents actionable objectives that can be useful for informing public policy decisions, it does not examine the relative importance of each objective, nor does it propose a means of implementing them to guide the actual policy-making process. Additionally, Kaur et al. (2021) note regarding existing cyberstalking research: “Despite considerable studies directed at its examination, the current research on cyberstalking is limited by a lack of clarity on its characterization and prevalence, coupled with a fragmented research focus” (p. 1). To rectify this issue, Kaur et al. (2021) state, “scholars should consider investigating the effectiveness of intervening conditions, if any, that can be created by online platforms, legal authorities, and civic stakeholders” (p. 12). This is important as Kaur et al. (2021) perform a systematic literature review of the existing cyberstalking literature, exploring the gaps in current research and noting areas of need for future research endeavors. Specifically, they call for the exact type of policy-based research developed in this study.
Hence, an important gap exists in the relevant academic research. This study aims to fill that gap by proposing how actionable objectives can be used to drive the creation of effective cyberstalking-specific legislation, enhancing the decision-making process of policymakers.
Theoretical framing
The purpose of this section is to explain the use of social choice theory (Sen, 1999), Keeney's (1988) public value forum and Dhillon and Smith's (2019) Fundamental Objectives for the prevention of cyberstalking. We explain how social choice theory acts as a guiding theoretical framework for informing the process of Keeney's (1988) public value forum as well as how Dhillon and Smith's (2019) Fundamental Objectives serve as the basis of our work, providing the key elements for developing implementation scenarios and various alternative choices.
Social choice theory
Social choice theory (Sen, 1999) is used as a theoretical basis to study how individual opinions, preferences, and interests come together for a collective decision. Social choice theory addresses questions such as “How can a group of individuals choose a winning outcome from a given set of options?” and “How can a collective of individuals arrive at coherent collective preferences on some issues, on the basis of its members' individuality?” The theory also allows for the ranking of different social alternatives for the overall well-being of the society, taking an individualistic perspective and then aggregating those preferences and behaviors of the individual members to the larger context of the given society. The means for assessing aggregated individual preferences proceeds from a set of reasonable axioms of social choice to form what can be termed a social value function. A social value function ranks social states as less desirable, more desirable, or indifferent for every possible pair of social states. The inputs of this social value function include any variables considered to affect the well-being of a society.
Therefore, social choice theory depends upon the ability to aggregate individual preferences into a combined social value function, demonstrating the aggregate societal preference for a given decision context (Sen, 1999). In most cases individual preferences are modeled in terms of a single-attribute utility function that can then be aggregated into a group utility function. Importantly, an individual's preferences must be measured in the same way for the entire group of participants, using the same scales and contextual understanding. The ability to create a social value function depends heavily on the ability to compare these individual utility functions, termed an interpersonal utility comparison. Sen (1977, 1999) states that the comparability of interpersonal utility need not be complete, as even complete interpersonal comparison of utility could lead to socially suboptimal choices due to the mental states of individuals being malleable. A person's disposition may influence their mental state and cause them to derive high utility from a relatively small social benefit. However, according to Sen (1977, 1999), this should not nullify an individual's claim to compensation or equality in the realm of social choice. Hence, while individuals may perceive more or less benefit from a particular social action than most others, they should be included with equal representation in the aggregated social value function. The overall decision-making outcome is then representative of society's preference for the means by which an action should occur.
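As an illustrative sketch of this aggregation (our own, not drawn from the cited works), an equal-weight social value function over normalized individual utilities can be expressed as follows; the participants, policy alternatives and utility scores are hypothetical:

```python
def social_value(individual_utilities, alternative):
    """Average individual utilities for one alternative into a social value.

    Assumes each participant's utility is already normalized to the same
    0-1 scale, so interpersonal comparison uses identical units; equal
    weighting gives every individual the same representation.
    """
    return sum(u[alternative] for u in individual_utilities) / len(individual_utilities)

# Three hypothetical participants scoring two policy alternatives on [0, 1]:
utilities = [
    {"policy_A": 0.9, "policy_B": 0.2},
    {"policy_A": 0.5, "policy_B": 0.8},
    {"policy_A": 0.7, "policy_B": 0.4},
]

# social_value(utilities, "policy_A") averages 0.9, 0.5 and 0.7 to 0.7,
# so policy_A is socially preferred to policy_B (average 0.47).
```

Note that the third participant's modest score still counts fully toward the aggregate, reflecting Sen's requirement of equal representation regardless of how much benefit an individual perceives.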
Integrating social choice theory
In order to utilize social choice theory as a theoretical framing mechanism for this study (see Sen, 1977, 1999; Tversky and Kahneman, 1986), three key aspects must exist within the methodological process:
A set of alternatives must be defined, which can be a set of objects or set of actions.
Two technical assumptions should be satisfied – transitivity and completeness, allowing for individuals to rank or order their preferences amongst a set of alternatives.
The formal expression of preferences, either as strict, weak or indifferent.
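The two technical assumptions above can be verified mechanically for any elicited preference relation. The following sketch is our own illustration (the alternatives and relations are hypothetical, not part of the forum instrument):

```python
from itertools import combinations, product

def is_complete(weakly_prefers, alternatives):
    """Completeness: every pair of alternatives is comparable (a >= b or b >= a)."""
    return all(weakly_prefers(a, b) or weakly_prefers(b, a)
               for a, b in combinations(alternatives, 2))

def is_transitive(weakly_prefers, alternatives):
    """Transitivity: if a >= b and b >= c, then a >= c, for every triple."""
    return all(not (weakly_prefers(a, b) and weakly_prefers(b, c))
               or weakly_prefers(a, c)
               for a, b, c in product(alternatives, repeat=3))

# A ranking-based weak preference satisfies both axioms:
rank = {"A": 1, "B": 2, "C": 3}              # 1 = most preferred
prefers = lambda x, y: rank[x] <= rank[y]
alts = ["A", "B", "C"]
```

A cyclic relation (A over B, B over C, C over A) would pass the completeness check yet fail transitivity, which is why both axioms must hold before individual preferences can be meaningfully ranked and aggregated.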
Therefore, we articulate social choice theory by using the public value forum methodology of Keeney (1988) as a means for eliciting and aggregating individualistic preferences. The group utility function produced as an output of the public value forum can be considered the equivalent of a social value function as the public value forum represents the collective preferences for various social states and related variables affecting society in a given context (Keeney et al., 1990; Keeney, 2013). Further, the use of Keeney's (1988, 2013) public value forum enables each individual utility function to be provided equal weighting in the final social value function, satisfying Sen's (1977, 1999) call for social equality in choice. Hence, Keeney's (1988) public value forum enables us to address each of the aforementioned aspects of social choice theory, by allowing us to present individuals with a range of alternatives, constrained within the specific decision context (i.e. the prevention of cyberstalking), satisfying the assumptions of transitivity and completeness, and lastly providing a means of quantitatively demonstrating forms of individual preferences (see Figure 1).
Cyberstalking objectives
For this study we selected Dhillon and Smith's (2019) Fundamental Objectives for the prevention of cyberstalking as they represent actionable objectives aimed at solving the real-world problem of cyberstalking. Dhillon and Smith (2019) provide a theoretically grounded normative framework for instantiating value-based objectives as elicited by Keeney's (1996) Value-focused Thinking methodology. They use Nissenbaum's (2004) contextual integrity as a theoretical framework for applying the technique to develop these goal-oriented actionable objectives. These objectives, derived from the latent societal norms of the interviewed participants, represent the theoretical social constructs: norms of appropriateness and norms of distribution (Nissenbaum, 2004). Using contextual integrity enables the examination of the complex social norms regarding the flow of private information in the cyberstalking context, elucidating latent norms of distribution and appropriateness and making them explicit. By systematically interviewing over 100 individuals using Keeney et al.'s (1990) Value-focused Thinking methodology, 20 total objectives based on the underlying norms of distribution and appropriateness relevant to the context of cyberstalking were extracted. The purpose of framing the objectives with this theoretical framework is to ensure that contextual integrity is maintained and, in turn, that cyberstalking will be prevented.
Dhillon and Smith (2019) derived five fundamental objectives (see Table 1) that serve as the basis for the cyberstalking prevention objectives and the implementation alternatives in this research. Using these five fundamental objectives allows us to heed their call for developing theoretically grounded implementation scenarios and to drive the creation of effective public policy aimed at the prevention of cyberstalking that incorporates the affected stakeholder perspective in the decision-making process.
Methodology
The public value forum itself is a meeting of members of the general public that can last one to two days and involves between five and twenty-five participants (Keeney et al., 1990; Keeney, 2013). To begin, we identify and select members of the general public to participate in the study; Keeney (1988) notes that there are two basic approaches to doing so. The first is the stakeholder approach, in which groups who have a specific stake in the outcome of any policy decisions are identified and asked to participate in the study. This can be especially useful when covering a controversial topic due to the emphasis on negotiation and conflict resolution (Keeney et al., 1990, 1996, 2013). The second is the representative approach, in which members of the public are selected at random; it is most useful when little to no knowledge exists about reasonable public values to drive policy decisions (Keeney et al., 1990; Keeney, 1996, 2013). Due to the relatively new nature of the cyberstalking phenomenon, little knowledge currently exists with respect to public values regarding policy decisions, and therefore the representative approach was selected for use in this study. However, a smaller value forum using the first approach, consisting of “experts” (persons with a background in security or previous experience in handling cyberstalking in an organization), was conducted for comparative purposes and ultimately combined for a holistic perspective.
Drawing on Keeney's (1988) work as well as recent work by Smith et al. (2021a), each step of the value forum is presented as follows:
1. Introduction and Motivation: At the onset of the forum, it is necessary to first provide participants with an understanding of the importance of using public value judgments in their decision-making process. Importantly, participants are provided an opportunity to ask any questions regarding clarification of the topic (i.e. cyberstalking) before moving on to stage 2.
2. Defining Objectives and Attributes: Once sufficient motivation is established, participants are given a value tree in which each objective and its corresponding attributes are clarified. Further, scenarios for the decision context are also presented and clarified before moving to stage 3.
3. Ranking and Elicitation of Single-Attribute Utility Functions: In this stage, participants rank and rate the attributes. Because the elicited attribute levels may not appropriately reflect their desirability or utility relative to one another, single-attribute utility functions are also used to demonstrate the relative desirability of a given objective or scenario. The choice of rating method depends on the purpose of the value forum, with most instances requiring only a simple rating method.
4. Elicitation of Tradeoffs: Understanding participant tradeoffs among a range of varying attributes is important as they express the relative importance of attribute units by defining the exchange rate of one attribute unit vs. another (Smith et al., 2021a). Numerous methods can be utilized to elicit tradeoffs, such as swing weighting; the choice of the appropriate method depends on the policy context and the purpose of the value forum.
5. Construction of a Multi-Attribute Utility Model: In this stage of the forum, the tradeoffs elicited in step 4 are converted into weights for the attributes using standard multi-attribute utility techniques (Keeney et al., 1990; Keeney, 1996). In most instances, a multi-attribute utility model is a simple weighted average of the constructed single-attribute utilities. However, a simple additive function may not be sufficient; if results from the additive model appear questionable, researchers should perform additional tests to determine whether a more complex multiplicative or multi-linear model is required (Keeney, 1996). If a more complex model is required, additional trade-off questions are needed to elicit the additional parameters of the model. Lastly, the multi-attribute utility model, or value model (Keeney, 1996; Akkermans and Van Helden, 2002), is used in combination with the expert evaluations to generate an overall model.
To describe the utility of this model, we present a paraphrased version of the explanation provided by Smith et al. (2021a). Consider the fundamental objectives to be O1, …, On and m1 a measure for fundamental objective O1. The vector m = (m1, m2, …, mn) thus provides a description of a particular path whereby a fundamental objective is delivered. The cumulative value of m acts as a measure (quantitative or qualitative) of the distinctive resources and abilities fitting the decision context (i.e. the prevention of cyberstalking). For the additive case (Keeney, 1996), the overall utility v for any alternative described by m1-mn, as utilized by Smith and Dhillon (2020), is v(m1, m2, …, mn) = λ1v1(m1) + λ2v2(m2) + … + λnvn(mn), where vi is the single-attribute utility function for measure mi and the weights λi are positive scaling constants that sum to one.
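To make the additive form concrete, the sketch below converts swing-weight ratings into normalized weights and computes the overall utility of one alternative. The objective names, ratings and attribute levels are our own illustrative assumptions, not values elicited in the study:

```python
def normalize_swings(swing_points):
    """Turn raw swing-weight ratings into weights (the lambdas) summing to 1."""
    total = sum(swing_points.values())
    return {obj: pts / total for obj, pts in swing_points.items()}

def additive_utility(levels, weights, single_utils):
    """Additive model: v(m1, ..., mn) = sum_i lambda_i * v_i(m_i)."""
    return sum(weights[obj] * single_utils[obj](levels[obj]) for obj in weights)

# Hypothetical objectives with swing ratings (100 = most important swing):
swings = {"privacy": 100, "enforcement": 60, "awareness": 40}
weights = normalize_swings(swings)  # privacy 0.5, enforcement 0.3, awareness 0.2

# Identity single-attribute utilities over levels already scaled to [0, 1]:
utils = {obj: (lambda x: x) for obj in swings}
levels = {"privacy": 0.8, "enforcement": 0.5, "awareness": 1.0}

overall = additive_utility(levels, weights, utils)
# 0.5*0.8 + 0.3*0.5 + 0.2*1.0 = 0.75
```

Because the weights derive from elicited tradeoffs rather than direct importance statements, swing weighting ties each λ to the range of its attribute, which is what step 4 of the forum is designed to capture.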
Therefore, the process requires four key components: first, actionable objectives that are central to solving the given problem context; second, feasible scenarios for implementing those actionable objectives; third, the preferences related to both the objectives and scenarios of those affected by a given outcome for the problem context; and fourth, a normative framing mechanism for contextualizing the results of this research and developing understandings with theoretical and practical implications (Dhillon and Smith, 2019; Smith and Dhillon, 2020).
Design of the cyberstalking prevention value forum
To begin the value forum, non-expert participants (N = 21) were selected as a non-random purposive sample of volunteers. As this research employs both quantitative and qualitative approaches, relying on qualitative elicitation techniques (focus groups and interviews), a random sampling procedure is inappropriate for our given methodology (Marshall, 1996; Kelley et al., 2003). Some participants, even those of the general public, will provide richer insights due to their experience and greater understanding of the subject matter (Marshall, 1996; Kelley et al., 2003). As such, we used a targeted non-random sampling technique, called purposive or judgmental sampling, to select specific members of a target population who will answer the questions in the most productive manner (Marshall, 1996; Kelley et al., 2003). We set out to include a demographic that highlights key characteristics of those most affected by cyberstalking, with a broad range of participants to maximize the sample variation. Those characteristics include age, gender and understanding of cyberstalking, as those most affected by cyberstalking are between the ages of 18 and 24 and tend most often to be women (Alexy et al., 2005; Burke Winkelman et al., 2015). For this reason, we developed a sample that was intended to highlight these characteristics and then expanded it to include additional members of society, increasing the sample variation as much as reasonably possible.
Sample details
The participants ranged in age from 18 to 55, with a 60/40 split of women to men, and most of the participants had either been previous victims of cyberstalking or knew someone who had been victimized. Participants were mostly US-born citizens, but several were non-United States citizens. As a group, they were representative of the demographics of any major metropolitan city of the mid-Atlantic region of the United States. All participants were confirmed to possess a baseline understanding of the concept of cyberstalking prior to beginning the public value forum: prior to admittance to the study, all participants were asked a series of questions regarding their understanding of and experience with cyberstalking, such as “What is cyberstalking?” and “Have you ever been cyberstalked before?” Additionally, a sample of experts (N = 5) was selected as a point of comparison to the non-expert participants. The same purposive sampling technique was utilized for expert selection, and we solicited those with policy development and cyberstalking experience. As the purpose of this study is to determine public values regarding the prevention of cyberstalking, an expert panel was not strictly necessary, since all persons can be said to be experts with respect to their own values. However, it offered a point of comparison between general members of the public and people in a position to make decisions, with a more extensive background on the subject matter. Each expert has at least 10 years of experience in a decision-making capacity (setting company policy, handling harassment/stalking at an organization, etc.), currently has direct influence over organizational security policy, has at least 5 years of experience in cybersecurity and can be said to possess a greater awareness of the effects of cyberstalking than general public knowledge.
Objective selection and presentation
With participants selected, the objectives for individual assessment must be presented to the group. As previously mentioned, Dhillon and Smith's (2019) Fundamental Objectives for the prevention of cyberstalking were adopted as they represent actionable objectives aimed at solving the real world problem of cyberstalking. With the objectives in place, we move on to develop the implementation scenarios for each objective as well as the ranking structures.
Implementation scenarios and ranking
Using the fundamental objectives, “good” and “bad” scenarios were created, along with four alternate scenarios that represented different instantiations of the five objectives, reflecting what constitutes “good” and “bad” in the decision context given the provided attributes (see Figure 2). A set of alternatives is developed based on the attributes of the fundamental objectives, which provide the basis for a maximally optimal and a sub-optimal scenario to constrain the possible number of alternatives for implementation.
A table (see Table 1) representing the five objectives with their attributes was provided to the participants for reference during the remainder of the value forum. Next, participants were given the task of ranking and weighting the objectives, both prior to reviewing the explanations of the objectives' meanings and then again at the end of the study. The participants were asked to rank the five objectives for the prevention of cyberstalking (see Table 2 for an example), first in order of their perceived importance (1 = highest, 5 = lowest). Then, they were asked to review the “good” and “bad” scenarios for each objective and rank the magnitude of change, or “swing,” between these scenarios for each objective from largest (1) to smallest (5). Participants then assigned a weight indicating the relative magnitude of a given “swing” with respect to the objective they rated as having the largest degree of change between its “good” and “bad” scenarios: the objective ranked 1 is assigned a weight of 100, the objective ranked 5 a weight of 0, and all other ranks receive a corresponding weight between 0 and 100, in decreasing increments by order of rank (Keeney et al., 1990). At the end of the forum, the ranking and weighting process for the objectives is repeated. The purpose of this initial and final ranking and weighting was to determine how, if at all, the perceptions of participants changed during the study from their initial impression. Keeney (1988) noted that the public value forum process had the unintended benefit of enhancing participants' understanding of their own values and could result in changes in the ordering of objectives.
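The rank-to-weight anchoring described above can be sketched as follows; the equal spacing of intermediate ranks is an illustrative assumption, since participants could in practice place intermediate weights anywhere between the two anchors:

```python
def rank_to_weight(rank, n_objectives=5):
    """Map an importance rank (1 = highest) to the 0-100 weight scale:
    rank 1 -> 100, rank n -> 0, with intermediate ranks spaced in
    equal decreasing increments (an assumption for illustration)."""
    if not 1 <= rank <= n_objectives:
        raise ValueError("rank must lie between 1 and n_objectives")
    step = 100 / (n_objectives - 1)
    return 100 - (rank - 1) * step

# For five objectives, ranks 1..5 map to 100, 75, 50, 25 and 0.
```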
In addition to the ranking and weighting of the objectives, participants also examined scenarios labeled A, B, C and D, which expressed different potential real-world instantiations of the cyberstalking prevention objectives. These were juxtaposed with the “good” and “bad” scenarios, and participants were asked to rank them in order of preference relative to the “good” and “bad” scenarios, with their most preferred scenario receiving a 2 and their least preferred a 5. The “good” scenario was fixed at 1 and the “bad” at 6 in order to provide a conceptual basis for the participants' preference ranks. Afterwards, the participants were asked to give an importance weight (Keeney et al., 1990), with the “good” and “bad” scenarios again anchored at 100 and 0, respectively. This allowed participants to demonstrate how close they felt the respective scenarios came to the conceptions of “good” and “bad,” beyond simply ranking them (i.e. while a scenario may have ranked 2, a participant may have felt it was only 50% of the way to being a “good” implementation of the objectives). Once this was completed, participants were asked to rank each instance of scenarios A, B, C and D by individual objective.
During the evaluation of the overall scenarios, participants may have been forced to select an overall scenario ranking based on only a few aspects of a given scenario to which they assigned more importance than other parts (i.e. participants may have ranked scenario C as their overall preference while only finding its instantiation of one objective most preferable). In order to determine whether participants may have actually preferred a differing implementation of each objective by scenario, they were asked to rank each scenario individually by objective (see Appendix). This allowed a participant to, for example, select scenario C for the objective Ensure Technical Security (ETS) as their most preferred while selecting scenario A for the objective Increase Cyberstalking Security Procedures (ICSP). Participants were also asked to assign an importance weight to each ranking, using the same scale as before, with respect to how close each scenario came to a “good” implementation. This demonstrated both how preferred each individual scenario was to each participant and how each scenario compared to “good.” After the entire scenario ranking and weighting was accomplished, the participants were finally asked to re-rank and re-weight the overall objectives, as previously stated.
Hence, the public value forum is a three-phase process (See Figure 3) designed to elicit preferences for:
Value-based fundamental objectives aimed at the prevention of cyberstalking
Overall cyberstalking prevention scenarios
Each individual's preference for the application of each given implementation scenario by fundamental objective.
Results of the public value forum
After data collection from both non-expert and expert participants was completed, an analysis was conducted; the results are discussed in the following sections.
Initial importance rank, swing rank and swing weight data for non-experts
To begin, each of the five objectives was defined for the participants, and they were asked to rank the objectives in order of importance (see Table 3). In the initial rankings, participants provided ETS and ICSP with the highest overall median ranks of 2, while the objective Protect Online Interaction (POI) was assigned a median rank of 3 and Develop Strong Value System (DSVS) and Define Intermediaries to Prevent Cyberstalking (DIPC) were assigned median rankings of 4. Based on these initial rankings, participants, with only a definitional understanding of the objectives, clearly rated technical and procedural prevention measures highest in importance for the prevention of cyberstalking.
Next, participants assigned swing ratings for each objective based on the “good” and “bad” scenarios provided. These revealed that the differences between the “good” and “bad” scenarios were likewise rated highest for the technical and procedural objectives, with ETS and ICSP each receiving a median swing rank of 2. Interestingly, participants also found that DIPC had a large change from “bad” to “good,” giving it a median rank of 3, while POI and DSVS were given median ranks of 4. This seems to indicate that participants felt not only that ETS and ICSP were very important overall, but also that the swing between their “good” and “bad” implementations was the largest. With the weights (Keeney et al., 1990) for each swing ranking, participants were asked to demonstrate how drastic the change between the “good” and “bad” scenarios was for each objective. The results underscore how important these objectives were: ETS and ICSP received mean weights of 83.19 and 74.33 respectively (out of 100), while DIPC received 65.05, POI 56.95 and DSVS a mere 40.62. This would lead one to conclude that, if faced with limited resources, a strong focus on the technical and procedural prevention objectives might address the most pressing concerns of engaged users, as these objectives were not only ranked the highest but also weighted most heavily by participants as having the largest degree of impact between a “bad” and a “good” implementation.
Final importance rank, swing rank and swing weight data for non-experts
After the overall and individual scenario ranking and weighting was completed, participants were asked to re-evaluate their prior objective rankings and weightings to determine whether, after seeing potential real-world implementations of the objectives, their perceptions had changed (see Table 4). It was found that the overall importance rankings stayed relatively similar, with ETS and ICSP retaining median rankings of 2; however, POI rose in its median ranking from 3 to 2, while DSVS and DIPC stayed the same with a median ranking of 4 each. Swing rankings for each objective changed the most dramatically, as it appears that seeing proposed instances of implementation enhanced participants' understanding of the objectives. Median swing ranks for ETS and ICSP remained at 2, while POI moved up from 4 to 3, DIPC stayed at 3 and DSVS fell from 4 to 5. To highlight the magnitude of these changes, POI initially had a mean weight of 56.95, which rose to 72.67 on the final evaluation, while the fall of DSVS from 4 to 5 in swing rank did not produce a comparable change (40.62 to 40.24 mean weight). DIPC saw a large drop in mean weight, from 65.05 to 52.19, even though it retained its swing rank of 3.
This final recap of Importance Rank, Swing Rank and Swing Weight is useful because it re-emphasizes the importance of the technical and procedural objectives as well as highlights the impact “good” and “bad” implementations of each objective has in the public perception of the prevention of cyberstalking. Further, a deeper understanding of POI revealed that when participants were provided with real-world examples of scenarios illustrating the objectives, the protection of their online interactions received more importance and the difference between “good” and “bad” scenarios was viewed as much greater than they initially perceived. This finding revealed the importance of both creating a comprehensive understanding of the concept of the cyberstalking prevention objectives as well as how real-world instantiations of each objective can impact the public's conceptions of an objective's importance with respect to “good” and “bad” implementations.
Scenario selection preference for non-experts
The results from this portion of the study (see Table 5) provided insight into public preferences with respect to the actions a government body or organization should take in the prevention of cyberstalking. Participants heavily favored scenario C, with a median rank of 2 and a mean weight of 79.19, demonstrating a preference for an option in which the technical tools and procedures exist and can be turned on or off at the preference of the user, and in which some level of regulation ensures the adequacy of these prevention methods. By contrast, scenario D, which leaves the vast majority of the responsibility for prevention in the hands of the user, was the least preferred of all the scenarios, receiving a median rank of 5 and a mean weight of 45.19, indicating that participants did not find a “hands-off” approach appealing. This contrast is very important for government organizations, as it demonstrates that users clearly want mechanisms in place that work to prevent cyberstalking. However, participants indicated that they prefer to maintain a level of control and discretion over the exact use and implementation of those prevention mechanisms, as opposed to having mechanisms forced upon them.
Individual scenario selection by objective for non-experts
Lastly, participants were asked to rank the scenarios in order of preference for each objective. This was done to assess whether participants might prefer different methods of cyberstalking prevention implementation depending on the given objective. The results (see Table 6) indicate that, generally, scenarios B and C are the preferred choices for objective implementation, with scenario B leading scenario C for every objective except POI. This is interesting to note, as scenario B in POI calls for a high degree of mandatory regulation, which could have caused participants who would likely have preferred scenario B overall to instead select the less restrictive scenario C and rank it higher in a holistic situation. Individually, scenario B is generally preferred over scenario C, and the two are clearly the most preferred scenarios overall for the cyberstalking prevention objectives. This is still in line with the inference from the previous section: participants appear to prefer a degree of autonomy in the final implementation of these objectives, while also indicating a preference for a degree of regulation and law to enforce the existence of procedures, guidelines and technical controls. Based on these results, it would be reasonable to suggest that users would prefer an “opt-out” arrangement in which the existence of cyberstalking prevention tools is enforced in law, yet their ultimate use by consumers remains at the consumers' own discretion.
Expert value forum results
Initial importance rank, swing rank and swing weight data for experts
Similar to the non-expert portion of the value forum, the expert forum began by defining each of the five objectives, after which the experts were asked to rank them in order of importance (see Table 7). In the initial rankings, expert participants provided ETS and ICSP with the highest overall median ranks of 1 and 2 respectively, while the objective POI was assigned a median rank of 3 and DSVS and DIPC were assigned median rankings of 4. Based on these initial rankings, experts, much like the non-expert participants, clearly rated technical and procedural prevention measures highest in importance for the prevention of cyberstalking. Next, expert participants assigned swing ratings for each objective based on the “good” and “bad” scenarios provided. This revealed that the differences between the “good” and “bad” scenarios were likewise rated highest for the technical and procedural objectives, with ETS and ICSP receiving median swing ranks of 1 and 2, respectively. Much like the non-expert participants, experts overwhelmingly preferred strong technical and procedural controls at the onset of the value forum, with the median swing weights being heavily in favor of ETS and ICSP.
Final importance rank, swing rank and swing weight data for experts
After the overall and individual scenario ranking and weighting was completed, expert participants were asked to re-evaluate their prior objective rankings and weightings to determine whether, after seeing potential real-world implementations of the objectives, their perceptions had changed (see Table 8). It was found that the overall importance rankings stayed relatively similar, with ICSP retaining a median ranking of 2; however, POI rose in its median ranking from 3 to 2 and ETS dropped from 1 to 2, while DSVS and DIPC stayed the same with a median ranking of 4 each. Swing weights for each objective changed the most dramatically, as it appears that seeing proposed instances of implementation altered expert participants' perception of the objectives. Swing rankings did experience some change, with ETS moving from 1 to 2, DSVS dropping from 4 to 5 and DIPC moving up from 4 to 3. The most interesting aspect of these changes is that the swing weight for ICSP remained relatively consistent (from 90 to 92), yet ETS dropped dramatically (from 100 to 85) and POI improved dramatically (from 70 to 80). Interestingly, nearly the same shift is seen in the non-expert results between their initial and final ratings. This may demonstrate the importance of ensuring adequate participant understanding of the decision, as an increase in understanding in both the non-expert and expert groups resulted in similar shifts between initial and final outcomes.
Scenario selection preference for experts
In this phase, expert participants first evaluated the holistic scenario “options,” ranking them by preference relative to the “good” and “bad” anchors, with 2 being their most preferred and 5 their least preferred overall scenario for the objectives' implementation. The rankings were then weighted relative to how “good” or “bad” the experts felt they were compared to the baseline good and bad scenarios provided.
The study found (see Table 9) that the expert participants heavily favored scenario C, with a median rank of 2 and a mean weight of 93.6, again demonstrating a preference for an option in which the technical tools and procedures exist and can be turned on or off at the preference of the user, with some level of regulation ensuring the adequacy of these prevention methods. By contrast, whereas scenario D was the least preferred among the non-experts (median rank of 5, mean weight of 45.19), the experts found scenario A the least appealing, with a median rank of 5 and a mean weight of 37.2. Experts rated scenario D similarly poorly (median rank of 3, weight of 52), indicating that they disapprove of a near hands-off approach, yet find a heavy-handed government approach even less appealing. This is very important, as it demonstrates that users, expert and non-expert alike, clearly want mechanisms in place that work to prevent cyberstalking, while preferring to maintain a level of control and discretion over the exact use and implementation of those prevention mechanisms, as opposed to having mechanisms forced upon them. It is important to note that the mean rank of scenario D was 3.6 and the mean rank of B was 3.8; hence the swing weights accurately reflect the scaling of the participants' selections and demonstrate how close scenarios D and B were among experts in terms of preference.
Individual scenario selection by objective for experts
Finally, expert participants ranked each scenario in order of preference by objective. The results (see Table 10) indicate that the experts substantially prefer scenario C. Again, of note, scenario B had a higher mean rank, yet a lower median rank, than scenario D, and the mean weights reflect this. Similar to the non-expert participants, experts placed more weight on scenario B as a secondary preference, meaning scenarios C and B are the most preferred individual scenario implementations for experts as well. However, this differs from the non-experts, who, when given the ability to select by scenario instead of holistically, tended to show a greater preference for scenario B than for C.
Value forum utility function results
In the final stage of the public value forum, utility functions are calculated for each fundamental objective for each of the four scenarios provided in the study. The function is calculated individually for the non-experts and the experts, as well as for the combined groups overall, providing a model that enables the assessment of the potential efficacy of a policy derived from these potential scenarios. The importance of this process is twofold (Smith et al., 2021a): it facilitates an analysis of scenario utility within each objective, determining the preferences of each group as well as the overall preference, and it enables an organization or institution to build a policy that addresses each fundamental objective (collectively or individually) and to measure the utility of such solutions by a given implementation scenario.
To begin this process of calculating group utility, individual utility functions are calculated for each participant in both groups (Smith et al., 2021a). These individual utility functions were then weighted and summed (see formula pg. 4), providing the overall utility for each group. The scaled weight for each participant in each group was the same: whether a participant is considered an “expert” or a “non-expert” on the subject matter, all are equal members of the general public and should not be given any greater weight in the overall solution. It is important to note that having both the “expert” and “non-expert” groups allowed us to analyze whether there were any significant differences in values that might necessitate the use of a different weight scale, as well as to make overall comparisons. While differences do exist between the groups, the results show similar trends, with similar overall preferences. Also of note, smaller sample sizes, such as that of the expert group (n = 5), are more susceptible to expressions of significant differences in rank and weight between participants (Smith et al., 2021a).
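A minimal sketch of the equal-weight aggregation described above (the utility values are hypothetical, and this is not the authors' exact computation):

```python
def group_utility(individual_utilities):
    """Aggregate individual utility scores into a group utility using
    equal participant weights, mirroring the paper's choice to weight
    experts and non-experts identically."""
    if not individual_utilities:
        raise ValueError("at least one participant is required")
    return sum(individual_utilities) / len(individual_utilities)

# Hypothetical scenario utilities for five participants:
participant_utilities = [70, 85, 90, 60, 80]
print(group_utility(participant_utilities))  # prints 77.0
```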
The utilities for each objective (see Table 11) demonstrate the same trends observed in the previous sections' results: an obvious preference for scenarios B and C exists in both groups. However, experts appear to strongly prefer scenario C, while non-experts lean more towards scenario B. These preferences are reflected in the utilities for each group and are clearly seen in the final overall combined utilities. Further, in the non-expert group, a solution similar to scenario A would receive a higher utility than C in 4 of the 5 objectives; however, those differences are all less than 1 point. Similarly, a solution similar to scenario D is preferred over scenario B in 3 out of 5 objectives for the expert group. This is impactful, as it shows that scenarios C and B maintain overall consistency between groups and by objective in overall utility, while scenarios A and D, being extreme opposites, are highly variable. This suggests that approaches following a path of moderation in the application of these objectives are likely to be more appealing.
The demonstrated utility calculations are useful to an organization or governmental institution as they can aid in determining the potential efficacy of a cyberstalking prevention policy. For example (see Table 11), an organization may address the problem of cyberstalking by concentrating on the fundamental objectives ETS and ICSP while giving minimal effort or resources to the remaining objectives. If it only has the ability to implement policy that addresses these two objectives in a manner similar to scenario C, the remaining objectives are treated in a manner similar to scenario D. The organization would calculate the overall policy utility by adding the scenario C utilities of ETS and ICSP, 18.6 and 17.4, to the scenario D utilities of the remaining objectives, 8.7, 6.1 and 6.4 respectively. This would give the policy an overall scaled utility of 57.2 out of a possible 100 (representing the maximum possible potential benefit).
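The worked example above can be checked arithmetically; the figures are those quoted from Table 11, and the dictionary labels are merely for readability:

```python
# Scenario C utilities for the two prioritized objectives, plus
# scenario D utilities for the remaining three objectives
# (values as quoted from Table 11 in the text).
scenario_c_part = {"ETS": 18.6, "ICSP": 17.4}
scenario_d_part = {"POI": 8.7, "DSVS": 6.1, "DIPC": 6.4}

policy_utility = sum(scenario_c_part.values()) + sum(scenario_d_part.values())
print(round(policy_utility, 1))  # prints 57.2 (out of a possible 100)
```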
Discussion
Based on the results of this study, three distinct conclusions are drawn, utilizing social choice theory for interpretation and contextualization. First, baseline regulations must exist in order to aid organizations in the prevention of cyberstalking and to provide users with the confidence that the issue is being addressed. Second, users desire technical controls that can be used to protect them and their information from potential cyberstalking. Third, users want the freedom to choose to what extent the regulations and controls for the prevention of cyberstalking should be implemented. By combining the individual preferences of each participant in the public value forum, holistic group-level interpretations of the results become possible, with social choice theory providing explanatory power over group-level decisions. These findings represent society's preferences for preventing cyberstalking, with the overall intention of enhancing its members' social well-being.
Therefore, we make two significant contributions to the academic literature. First, we provide a normative framing mechanism for operationalizing Keeney's (1988) public value forum in a theoretically contextualized manner, enhancing the quality of knowledge produced through this methodological approach. Second, we extend the use of this process to incorporate ethics, utilizing ethically constrained fundamental objectives to guide the research process, and we discuss the implications of doing so. Thus, in the following sections, we detail the results and findings of the study, the theoretical implications, the ethical implications and the limitations of our research, as well as potential future research directions.
Results discussion
The results demonstrate a strong desire by participants to have clear regulations, policies and procedures that elucidate required protections against cyberstalking. For example, scenario D, which provided little or no governance of this issue regardless of objective, was the least preferred by virtually all non-expert participants, receiving no top rankings and 12 last-place rankings. Scenario A, requiring heavy governmental involvement, was least preferred by all expert participants and received no first-place rankings. This showed that both expert and non-expert participants were wary of solutions that incorporated either too much or too little involvement by regulatory or legislative bodies. Participants additionally demonstrated the need for strong technical controls by ranking ETS highly in the objective ratings, as well as through their scenario selections, indicating that a high degree of technical tooling should be available to users for protection from cyberstalking. In the application of policy and technical controls, participants demonstrated clear preferences for control over final implementation and enforcement, which would likely aid in the successful implementation of policy. At the holistic scenario level, scenario C, composed of individual instantiations of the five objectives for preventing cyberstalking, was the clear preference in the value forum for both groups. Its characteristics include some general government-mandated regulations about policy and technical controls, a bevy of technical control options available to the user and the ability of users to “opt out” of these controls if they felt they added undue burden or restriction. This is important to note in the context of the overall user scenario selection: it can be said that even if something is “good for you,” if it is forced upon the user, they may reject it regardless of the risk, an assertion for which there is some support in the literature.
As Herley (2009) noted in the context of security and usability, “users reject advice since it offers to shield them from the direct costs of attacks, but burdens them with increased indirect costs, or externalities. Since the direct costs are generally small relative to the indirect ones they reject this bargain. Since victimization is rare, and imposes a one-time cost, while security advice applies to everyone and is an ongoing cost, the burden ends up being larger than that caused by the ill it addresses.” Having some policy to enforce protocols and technical controls, while also giving the user freedom in choosing the level of restriction, proved popular even at the individual scenario ranking level, where scenario B options tended to be the most preferred, with C a close second overall. Scenario B tended to provide options similar to those of C; the difference lay in the degree to which procedures and technical controls were mandated. Based on these results, users clearly feel it is better to have procedures and technical controls clearly defined. The difference between the non-expert and expert utility preferences could lie in the assertion that non-experts are looking to those they deem “experts” to help guide them in framing protection mechanisms against cyberstalking. This is in contrast to the experts, who instead appear to defer to the non-experts, allowing them to select from a range of protections that best fit the individual. The utility calculations for both groups, and for the overall combination of the two groups, also support this possibility.
Theoretical implications: better information for better decision-making
In order for society to solve a problem as complex as cyberstalking, it must be able to understand the interplay between the information itself and the decision-making framework. It has long been understood within decision-making research that there is no simple relationship between “more information” and “better decisions” (Sarewitz and Pielke, 2007). That is to say, if the only implication of the given research is simply to add more information in order to provide a greater understanding of the subject matter, we are unlikely to solve the problem itself or inform decision-makers of their potential shortcomings within the decision-making process (Sarewitz and Pielke, 2007). The academic literature provides several justifications for this assertion: the information lacks relevance to the decision-maker; it lacks appropriateness for the given context; it lacks reliability; it carries conflicting values or interests; it is unknown at the time; or it is not sufficiently communicated in the process (Sarewitz and Pielke, 2007). Further, the individuals who stand to benefit from, or be most adversely affected by, the outcome of a decision process have a greater stake in the outcome of such decisions (Sarewitz and Pielke, 2007; Smith et al., 2021a). The public value forum methodology and social choice theory address several of these concerns and were therefore highly relevant to the selection of our theory and methodology. For Smith et al. (2021a), these reasons include relevance, appropriateness to the decision context and the involvement of the most affected stakeholders in the decision process. When Keeney's (1988) public value forum is contextualized with a theoretical framework such as Sen's (1977, 1999) social choice theory, with its focus on social equality of choice, we find that all of these justifications are addressed by this research.
Therefore, the implications of this study go beyond the simple addition of “more information”; instead, the study provides rich contextual information that informs the decision-making process by being relevant, appropriate, reliable, consistent, known and clearly communicated.
This study makes further contributions towards solving the problem of cyberstalking. Its results provide the information necessary to guide appropriate decisions, which are known to be strongly influenced by a number of factors, including institutional structures, prior practice, political stakes and distributions of power (Sarewitz and Pielke, 2007; Smith et al., 2021a). Because these factors are present in all the affected stakeholders for a given context, they strongly influence the type of information decision-makers require to solve a problem such as cyberstalking (Sarewitz and Pielke, 2007; Smith et al., 2021a). Incorporating these complex factors into the decision-making process necessitates the inclusion of both experts and non-experts. In doing so, our study firmly grounds any decisions based on this process in the values and interests of the vested stakeholders, motivating them to enhance societal well-being through the inculcation of their individual preferences within the process itself (see Figure 4). Using their individual constraints to frame their elicited preferences, we can extrapolate them to a useful and inclusive set of holistic group preferences, representative of Sen's (1977, 1999) conception of social equality of choice. The individual constraints inform stakeholder preferences, which provide the information needed to develop individual models; these individual models are then combined into a holistic group model that can inform the creation of public policy. This public policy, and therefore the group model, is thus representative of the constraints, preferences and individual models of the affected stakeholders.
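The individual-to-group modeling step described above can be sketched in code. This is a hypothetical illustration only: the objective names are abbreviated, the participant weights are invented, and the simple normalize-then-average aggregation rule is our assumption, not the study's exact elicitation procedure.

```python
# Hypothetical sketch: combining individual swing weights into a group model.
# Weights and the averaging rule are illustrative assumptions.

def normalize(weights):
    """Scale one participant's raw swing weights (0-100) so they sum to 1."""
    total = sum(weights.values())
    return {obj: w / total for obj, w in weights.items()}

def group_model(participants):
    """Average the normalized weights across all participants."""
    normalized = [normalize(p) for p in participants]
    objectives = normalized[0].keys()
    return {obj: sum(n[obj] for n in normalized) / len(normalized)
            for obj in objectives}

# Two hypothetical participants weighting the five fundamental objectives
p1 = {"Protect online interaction": 90, "Security procedures": 80,
      "Technical security": 100, "Values system": 20, "Intermediaries": 60}
p2 = {"Protect online interaction": 70, "Security procedures": 90,
      "Technical security": 85, "Values system": 40, "Intermediaries": 50}

weights = group_model([p1, p2])  # group weights, summing to 1
```

Because each participant's weights are normalized before averaging, no single participant's rating scale dominates the group model.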
Therefore, as researchers who seek to understand the behavior of scientific information in complex decision-making contexts, we must recognize that the utility of such information depends on the dynamics of the decision context and its broader social setting (Sarewitz and Pielke, 2007). Knowledge for the sake of knowledge does not necessarily provide any utility in solving a particular problem; hence, another important contribution of this research is that it provides utility that can be applied by the decision-maker. In support of this claim, Gibbons (1999) describes the conversion from the gold standard of “reliable knowledge,” self-determined by scientists, to “socially robust knowledge” (p. C81). This distinction is critical, as socially robust knowledge is the express goal of both a public value forum and social choice theory. To meet such a goal, per Gibbons (1999), knowledge must satisfy three important criteria: “First, it is valid not only inside but also outside the laboratory. Second, this validity is achieved through involving an extended group of stakeholders, including lay ‘experts'. And third, because ‘society' has participated in its genesis, such knowledge is less likely to be contested than that which is merely reliable” (p. C82). To this end, social choice theory and the public value forum incorporate these three dimensions of the decision-making process to transform “reliable knowledge” into useful “socially robust knowledge” that decision-makers can apply in setting public policy to solve the problem of cyberstalking.
Ethical implications
Another dimension of public policy that should influence the decision-making process regarding cyberstalking is ethics. Discussing the role of ethics in public policy, Goodin (2017) states: “Ethics constrains us, but ethics can also act as an ‘enabler', helping to secure compliance with public policies. Basing policies on ethical principles helps the public know what is required of them by public policies. Framing policies in those ways also primes people to think in terms of their own ethically based reasons for action. Basing policies on ethical principles can assist in securing the cooperation of potential veto players by creating cooperative norms and a culture of trust” (p. 1). It is important to note that the fundamental objectives in this study were normatively constrained in such a manner and explicitly intended for use in motivating public policy development in an ethical manner (Dhillon and Smith, 2019). For public policy to motivate society ethically, it must come from an ethically derived origin and be instantiated as public policy in an equally ethical manner. Then, through the enforcement of compliance with such a standard, we may find an ethical code of conduct instantiated in the form of public policy intended to deal with cyberstalking, while expanding the domains in which ethics informs such public policy (Goodin, 2017).
Limitations and future research directions
Keeney's (1988) public value forum is not without limitations. The single greatest disadvantage of the public value forum, according to Keeney (1988), is “the time and cost of the forum” (p. 1029). The process requires a large amount of time and cost to set up, manage and complete, making it difficult to repeat. As a byproduct of this time-consuming and costly nature, Keeney (1988) also notes disadvantages related to “the small sample [size] and the lack of representativeness” (p. 1029). While we took reasonable measures to ensure a representative cross-section in our sample and included an “expert forum” in our study for comparative purposes, it was beyond the scope of this research to repeat the forum more than once. However, Keeney (1988) does note that this technique is best used by targeting the sample selection on key stakeholders and leaders involved in the specific policy debate, as was done in this study.
Numerous opportunities for future research arise from this study. Importantly, the public value forum process can be repeated with a new set of stakeholders to compare and contrast results. Given that personal preferences can change over time, the preferences of key stakeholders may vary as they respond to changes in public policy. Further, the process can be repeated in a non-United States setting, exploring how other countries view cyberstalking and the use of public policy interventions to address it. Lastly, an experimental study could be designed to test the reactions of key stakeholders to the implementation of varied forms of public policy response.
Conclusion
The results of this research bring forth new knowledge in the relatively unexplored area of cyberstalking within the information systems field. Using the public value forum and multi-attribute utility modeling, informed theoretically by social choice theory, this investigation reveals which objectives and scenarios the general public finds most important and perceives as the greatest deterrents to cyberstalking. This knowledge is essential for governments and organizations developing policy-level measures and protections against cyberstalking that take into account the values and preferences of those affected by such policy. This is a significant contribution, as previous research in this area is under-developed and falls short of directing the proposal and generation of tangible measures and protections against cyberstalking. The results clearly indicate a strong public preference for technical and procedural controls aimed at preventing cyberstalking at a policy level, while still leaving final control over the exact implementation to the people themselves. This research extends the process further by providing multi-attribute utilities that incorporate the values of cyberstalking experts and non-experts, yielding a flexible model for the creation of cyberstalking prevention policy at the governmental and organizational level that maximizes policy value.
Figures
Fundamental objectives for preventing cyberstalking from Dhillon and Smith (2019)
Protect online interaction:
- Exercise caution when meeting online
- Minimize use of public forums
- Ensure protection mechanisms in online forums
- Remove harmful online chat rooms

Increase cyberstalking security procedures:
- Ensure online browsing security
- Increase authentication measures
- Ensure availability of cyberstalking prevention tools
- Ensure website trustworthiness

Ensure technical security:
- Invest in safe browsing technologies
- Increase use of tools to prevent stealing of information
- Manage login credentials effectively
- Monitor security settings online
- Create online filters to block negative behavior

Develop strong values system:
- Ensure strong family values to prevent cyberstalking
- Increase social pressures to reduce cyberstalking
- Increase family support in ensuring information protection measures

Define intermediaries to prevent cyberstalking:
- Develop payment systems to ensure security
- Create pay services to protect consumer online information
- Increase use of personal information insurance to protect privacy
- Develop trust forming mechanisms to protect against cyberstalking
Example of objective ranking and weighting
Cyberstalking fundamental objectives | Objective importance rank (1–5) | Objective importance swing rank (1–5) | Objective importance swing % (0–100)
---|---|---|---
Protect online interaction | 1 | 2 | 90
Increase cyberstalking security procedures | 2 | 3 | 80
Ensure technical security | 3 | 1 | 100
Develop strong values system | 5 | 5 | 0
Define intermediaries to prevent cyberstalking | 4 | 4 | 60
Non-expert initial ranking and weighting results
Objective | Median of importance rank | Median rank of swing weights | Mean of swing weights |
---|---|---|---|
Protect online interaction | 3 | 4 | 56.95 |
Increase cyberstalking security procedures | 2 | 2 | 74.33 |
Ensure technical security | 2 | 2 | 83.19 |
Develop strong values system | 4 | 4 | 40.62 |
Define intermediaries to prevent cyberstalking | 4 | 3 | 60.05 |
Non-expert final ranking and weighting results
Objective | Median of importance rank | Median rank of swing weights | Mean of swing weights
---|---|---|---
Protect online interaction | 2 | 3 | 72.67
Increase cyberstalking security procedures | 2 | 2 | 76.91
Ensure technical security | 2 | 2 | 80
Develop strong values system | 4 | 5 | 40.24
Define intermediaries to prevent cyberstalking | 4 | 3 | 52.19
Non-expert overall scenario ranking and weighting results
Scenario | Median rank | Mean weight |
---|---|---|
A | 3 | 59.76 |
B | 3 | 65.80 |
C | 2 | 78.19 |
D | 5 | 45.19 |
Good | 1 | 100 |
Bad | 6 | 0 |
Non-expert individual scenario ranking and weighting results
Note: POI = Protect online interaction; ICSP = Increase cyberstalking security procedures; ETS = Ensure technical security; DSVS = Develop strong values system; DIPC = Define intermediaries to prevent cyberstalking
Scenarios | POI median rank (1–6) | POI mean weight (0–100) | ICSP median rank (1–6) | ICSP mean weight (0–100) | ETS median rank (1–6) | ETS mean weight (0–100) | DSVS median rank (1–6) | DSVS mean weight (0–100) | DIPC median rank (1–6) | DIPC mean weight (0–100)
---|---|---|---|---|---|---|---|---|---|---
Good Scenario | 1 | 100 | 1 | 100 | 1 | 100 | 1 | 100 | 1 | 100 |
Scenario A | 4 | 67.05 | 3 | 70.57 | 3 | 74.90 | 4 | 63.24 | 3 | 71.38 |
Scenario B | 3 | 71.90 | 3 | 77.29 | 3 | 75.52 | 2 | 75.38 | 3 | 72.43 |
Scenario C | 3 | 77.67 | 3 | 70.62 | 3 | 70.48 | 4 | 66.67 | 4 | 67 |
Scenario D | 5 | 37.52 | 5 | 40.71 | 5 | 43.62 | 5 | 43.86 | 5 | 46.24 |
Bad Scenario | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 0 |
Expert initial ranking and weighting results
Objective | Median of importance rank | Median of swing weights | Median rank of swing weights
---|---|---|---|
Protect online interaction | 3 | 70 | 3 |
Increase cyberstalking security procedures | 2 | 90 | 2 |
Ensure technical security | 1 | 100 | 1 |
Develop strong values system | 4 | 60 | 4 |
Define intermediaries to prevent cyberstalking | 4 | 40 | 4 |
Expert final ranking and weighting results
Objective | Median of importance rank | Median of swing weights | Median rank of swing weights
---|---|---|---|
Protect online interaction | 2 | 80 | 3 |
Increase cyberstalking security procedures | 2 | 92 | 2 |
Ensure technical security | 2 | 85 | 2 |
Develop strong values system | 4 | 0 | 5 |
Define intermediaries to prevent cyberstalking | 4 | 60 | 3 |
Expert overall scenario ranking and weighting results
Scenario | Median rank | Mean weight |
---|---|---|
A | 5 | 37.2 |
B | 4 | 58 |
C | 2 | 93.6 |
D | 3 | 52 |
Good | 1 | 100 |
Bad | 6 | 0 |
Expert individual scenario ranking and weighting results
Scenarios | POI median rank (1–6) | POI mean weight (0–100) | ICSP median rank (1–6) | ICSP mean weight (0–100) | ETS median rank (1–6) | ETS mean weight (0–100) | DSVS median rank (1–6) | DSVS mean weight (0–100) | DIPC median rank (1–6) | DIPC mean weight (0–100)
---|---|---|---|---|---|---|---|---|---|---
Good Scenario | 1 | 100 | 1 | 100 | 1 | 100 | 1 | 100 | 1 | 100 |
Scenario A | 5 | 38.2 | 5 | 26.4 | 5 | 28.2 | 5 | 26.4 | 5 | 40 |
Scenario B | 4 | 63 | 4 | 53 | 4 | 57 | 4 | 53 | 4 | 60 |
Scenario C | 2 | 81.8 | 2 | 79.8 | 2 | 80.8 | 2 | 76.8 | 2 | 81.8 |
Scenario D | 3 | 57 | 3 | 53 | 3 | 53 | 3 | 51 | 3 | 53 |
Bad Scenario | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 0 |
Non-expert, expert and overall utility function results
Protect online interaction | Non-experts utility | Experts utility | Overall utility
---|---|---|---
Scenario A | 15.9 | 5.1 | 13.8
Scenario B | 17.2 | 9.5 | 15.7
Scenario C | 17.6 | 13 | 16.8
Scenario D | 8.3 | 10.3 | 8.7

Increase cyberstalking security procedures | Non-experts utility | Experts utility | Overall utility
---|---|---|---
Scenario A | 17.1 | 6.5 | 15.1
Scenario B | 18.5 | 13.4 | 17.5
Scenario C | 16.7 | 20.5 | 17.4
Scenario D | 10.4 | 13.5 | 11

Ensure technical security | Non-experts utility | Experts utility | Overall utility
---|---|---|---
Scenario A | 18.6 | 7.3 | 16.3
Scenario B | 18.8 | 14.3 | 17.8
Scenario C | 18.3 | 20.4 | 18.6
Scenario D | 11.3 | 14.9 | 11.9

Develop strong values system | Non-experts utility | Experts utility | Overall utility
---|---|---|---
Scenario A | 10.2 | 2.7 | 6.9
Scenario B | 12 | 10.6 | 9.7
Scenario C | 9.4 | 17.2 | 9.5
Scenario D | 7.1 | 7.9 | 6.1

Define intermediaries to prevent cyberstalking | Non-experts utility | Experts utility | Overall utility
---|---|---|---
Scenario A | 11 | 7.7 | 10.3
Scenario B | 11.5 | 8.4 | 10.9
Scenario C | 10.5 | 9.4 | 10.3
Scenario D | 6.4 | 6.4 | 6.4
Fundamental Objectives Table
Instrument 1
Cyberstalking fundamental objectives | Objective importance rank (1–5) | Objective importance weight (0–100) | Objective swing rank (1–5) | Swing weighting rank (0–100) | Good scenario | Scenario A | Scenario B | Scenario C | Scenario D | Bad scenario
---|---|---|---|---|---|---|---|---|---|---
Protect online interaction | | | | | | | | | |
Increase cyberstalking security procedures | | | | | | | | | |
Ensure technical security | | | | | | | | | |
Develop strong values system | | | | | | | | | |
Define intermediaries to minimize cyberstalking | | | | | | | | | |
Scenario rank (1–6) | | | | | 1 | | | | | 6
Scenario importance % (0–100) | | | | | 100 | | | | | 0
Scenario swing weight | | | | | 100 | | | | | 0
Instrument 2
The following three columns — Rank (1–6), Rank importance weight (0–100) and Swing weight (0–100) — are repeated for each of the five cyberstalking fundamental objectives: protect online interaction, increase cyberstalking security procedures, ensure technical security, develop strong values system and define intermediaries to minimize cyberstalking.

Scenarios | Rank (1–6) | Rank importance weight (0–100) | Swing weight (0–100)
---|---|---|---
Good Scenario | 1 | 100 | x
Scenario A | | |
Scenario B | | |
Scenario C | | |
Scenario D | | |
Bad Scenario | 6 | 0 | x

Beneath each objective's columns, participants also record: Objective rank (1–5), Objective swing rank (1–5), Objective weighting (0–100) and Objective swing rank weighting (0–100).
Conflict of interest: Author Gurpreet Dhillon declares that he has no conflict of interest. Author Kane J. Smith declares that he has no conflict of interest.
Instructions for Instrument 1
Participants review the objective meanings and then review the “Good” and “Bad” scenarios to understand the context as well as what is considered ideal (i.e. the best possible scenario) and what is not (i.e. the worst possible scenario).
Based on the meanings of the objectives and the good/bad scenarios, participants are asked to rank the objectives in order of importance, with 1 being the most important and 5 the least important. They are also asked to give each a weight: 100 for rank 1 and 0 for rank 5, with each subsequent rank decreasing the weight relative to 100 (e.g. 1 = 100, 2 = 90, 3 = 50, etc.). The idea is to find how important each objective is relative to the others in terms of achieving the decision context.
Next, they assign a “swing rank”: they review the bad scenario and the good scenario and decide how large and important the “swing” from bad to good is for each objective (i.e. the change that most affects the prevention of cyberstalking within that objective). The objective whose scenario swings the most from bad to good (i.e. the change is the most dramatic) is given a 1 and the least a 5. Swing ranks are also given a percentage rating, with rank 1 getting 100% and rank 5 getting 0%; all others get a value in between that is less than the previous (e.g. 1 = 100, 2 = 80, 3 = 60, 4 = 20, 5 = 0).
Lastly, participants return to the scenarios to rank them in order of desirability. The “good” scenario is ranked 1 and the “bad” scenario is ranked 6. Participants select the next most desirable scenario and assign it a 2, and so on until all scenarios have been assigned a value. Then participants assign a percentage to each scenario, as previously done for objectives, where rank 1 gets 100 and rank 6 gets 0; all others get a value in between that is less than the previous (e.g. 1 = 100, 2 = 80, 3 = 60, 4 = 20, 5 = 10 and 6 = 0). The final step is to assign each scenario a swing weight based on the assumption that the “good” scenario is 100 and the “bad” scenario is 0. Participants rate each scenario's swing from the “good” scenario. Example: Good = 100, rank 2 = 90% as good as the “good” scenario, rank 3 = 70% as good as the “good” scenario, and so on.
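The ranking and weighting rules above can be expressed as a simple consistency check. The helper below is hypothetical (not part of the study materials); the objective abbreviations and the sample response are illustrative assumptions.

```python
# Hypothetical validator for an Instrument 1 response: each rank 1-5 is used
# exactly once, rank 1 carries weight 100, rank 5 carries weight 0, and
# weights strictly decrease as ranks increase.

def valid_instrument1(ranks, weights):
    """ranks and weights are dicts keyed by objective name."""
    if sorted(ranks.values()) != [1, 2, 3, 4, 5]:
        return False  # every rank must be used exactly once
    by_rank = sorted(ranks, key=ranks.get)       # objectives, best first
    w = [weights[obj] for obj in by_rank]
    if w[0] != 100 or w[-1] != 0:
        return False  # anchors: rank 1 = 100, rank 5 = 0
    return all(a > b for a, b in zip(w, w[1:]))  # strictly decreasing

# Illustrative response (abbreviations as in the result tables)
response_ranks = {"POI": 1, "ICSP": 2, "ETS": 3, "DSVS": 5, "DIPC": 4}
response_weights = {"POI": 100, "ICSP": 90, "ETS": 50, "DSVS": 0, "DIPC": 20}
```

A response that gives a lower-ranked objective a higher weight than a better-ranked one would fail this check, which mirrors the "each rank decreases the weight relative to 100" rule.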
Instructions for Instrument 2
Instrument 2 works similarly to Instrument 1, except that participants rank and weight each objective's scenarios separately.
Each participant reviews the scenarios and begins by ranking each scenario per objective, with good getting a 1 and bad getting a 6. They then fill in the remaining ranks in order of desirability for each scenario per objective. They do the same for the weighting of each scenario, where rank 1 gets 100 and rank 6 gets 0; all others get a value in between that is less than the previous (e.g. 1 = 100, 2 = 80, 3 = 60, 4 = 20, 5 = 10 and 6 = 0).
Lastly, the objectives are re-ranked, weighted and swing-weighted as in Instrument 1, to determine whether there have been any changes in opinion.
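Combining the two instruments yields an additive multi-attribute utility for each scenario: normalized objective weights (from Instrument 1) multiplied by each scenario's per-objective score (from Instrument 2) and summed. The sketch below is a hedged illustration of that additive form; all numbers are invented for the example, not the study's elicited data.

```python
# Hypothetical sketch of the additive multi-attribute utility computation:
# U(scenario) = sum over objectives of (normalized weight * 0-100 score).
# Weights and scores below are illustrative, not the study's results.

def scenario_utility(objective_weights, scenario_scores):
    """Return the weighted utility of one scenario on a 0-100 scale."""
    total = sum(objective_weights.values())
    return sum((w / total) * scenario_scores[obj]
               for obj, w in objective_weights.items())

# Instrument 1: swing weights for the five objectives (illustrative)
weights = {"POI": 80, "ICSP": 90, "ETS": 100, "DSVS": 40, "DIPC": 60}

# Instrument 2: one scenario's 0-100 weight on each objective (illustrative)
scenario_c = {"POI": 78, "ICSP": 71, "ETS": 70, "DSVS": 67, "DIPC": 67}

u = scenario_utility(weights, scenario_c)  # falls between 0 and 100
```

Because the weights are normalized to sum to one and every per-objective score lies between 0 (the bad scenario) and 100 (the good scenario), the resulting utility is also bounded by 0 and 100.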
References
Akkermans, H. and Van Helden, K. (2002), “Vicious and virtuous cycles in ERP implementation: a case study of interrelations between critical success factors”, European Journal of Information Systems, Vol. 11 No. 1, pp. 35-46.
Alexy, E.M., Burgess, A.W., Baker, T. and Smoyak, S.A. (2005), “Perceptions of cyberstalking among college students”, Brief Treatment and Crisis Intervention, Vol. 5 No. 3, p. 279.
Burke Winkelman, S., Oomen-Early, J., Walker, A.D., Chu, L. and Yick-Flanagan, A. (2015), “Exploring cyber harassment among women who use social media”, Universal Journal of Public Health, Vol. 3 No. 5, p. 194.
Castaños, J. (2016), “Updating antiquated legal analysis: re-evaluating the need to prosecute cyberstalkers”, Brigham Young University Prelaw Review, Vol. 30 No. 1, pp. 1-14.
Chung, C. (2017), “An old crime in a new context: Maryland's need for a comprehensive cyberstalking statute”, University of Maryland Law Journal on Race, Religion, Gender and Class, Vol. 17, p. 117.
Coss, D.L., Smith, K., Foster, J. and Dhillon, S. (2019), “Big data in auditing: a value-focused approach to cybersecurity management”, Journal of Information System Security, Vol. 15 No. 2, pp. 77-100.
Cupach, W. and Spitzberg, B. (2001), “Obsessive relational intrusion: incidence, perceived severity, and coping”, Violence and Victims, Vol. 15 No. 1, pp. 1-16.
DeMatteo, D., Wagage, S. and Fairfax-Columbo, J. (2017), “Cyberstalking: are we on the same (web) page? A comparison of statutes, case law, and public perception”, Journal of Aggression, Conflict and Peace Research, Vol. 9 No. 2, pp. 83-94.
Dhillon, G. and Smith, K.J. (2019), “Defining objectives for preventing cyberstalking”, Journal of Business Ethics, Vol. 157 No. 1, pp. 137-158.
Dhillon, G. and Torkzadeh, G. (2006), “Value-focused assessment of information system security in organizations”, Information Systems Journal, Vol. 16 No. 3, pp. 293-314.
Dhillon, G., Oliveira, T., Susarapu, S. and Caldeira, M. (2016), “Deciding between information security and usability: developing value based objectives”, Computers in Human Behavior, Vol. 61, pp. 656-666.
Gibbons, M. (1999), “Science's new social contract with society”, Nature, Vol. 402 No. Supplemental, pp. C81-C84.
Goodin, R.E. (2017), “Ethics as an enabler of public policy”, The Political Quarterly, Vol. 88 No. 2, pp. 273-279.
Goodno, N.H. (2007), “Cyberstalking, a new crime: evaluating the effectiveness of current state and federal laws”, Missouri Law Review, Vol. 72 No. 1, pp. 125-198.
Green, F. (2021), “Sextortion, cyber-crimes and cyberstalking are on the rise”, available at: https://www.usnews.com/news/best-states/virginia/articles/2021-04-24/sextortion-cyber-crimes-and-cyberstalking-are-on-the-rise (accessed 22 October 2021).
Hazelwood, S. and Koon-Magnin, S. (2013), “Cyber stalking and cyber harassment legislation in the United States: a qualitative analysis”, International Journal of Cyber Criminology, Vol. 7 No. 2, pp. 155-168.
Herley, C. (2009), “So long, and no thanks for the externalities: the rational rejection of security advice by users”, Proceedings of the 2009 Workshop on New Security Paradigms Workshop, ACM, Oxford, pp. 133-144.
Kaur, P., Dhir, A., Tandon, A., Alzeiby, E.A. and Abohassan, A.A. (2021), “A systematic literature review on cyberstalking. An analysis of past achievements and future promises”, Technological Forecasting and Social Change, Vol. 163, pp. 1-15.
Keeney, R.L. (1988), “Structuring objectives for problems of public interest”, Operations Research, Vol. 36 No. 3, pp. 396-405.
Keeney, R.L. (1996), “Value-focused thinking: identifying decision opportunities and creating alternatives”, European Journal of Operational Research, Vol. 92 No. 3, pp. 537-549.
Keeney, R.L. (2013), “Foundations for group decision analysis”, Decision Analysis, Vol. 10 No. 2, pp. 103-120.
Keeney, R.L. and Palley, A.B. (2013), “Decision strategies to reduce teenage and young adult deaths in the United States”, Risk Analysis, Vol. 33 No. 9, pp. 1661-1676.
Keeney, R.L., Winterfeldt, D.V. and Eppel, T. (1990), “Eliciting public values for complex policy decisions”, Management Science, Vol. 36 No. 9, pp. 1011-1030.
Kelley, K., Clark, B., Brown, V. and Sitzia, J. (2003), “Good practice in the conduct and reporting of survey research”, International Journal for Quality in Health Care, Vol. 15 No. 3, pp. 261-266.
Knight, M.A. (2014), “Stalking and cyberstalking in the United States and rural South Dakota: twenty-four years after the first legislation”, SDL Review, Vol. 59, p. 392.
Leahy, G. (2017), “What's the difference between private and public on social media? A push for clearer language in the Illinois cyberstalking statute”, DePaul Law Review, Vol. 66 No. 3, pp. 1-5.
Marshak, E. (2017), “Online harassment: a legislative solution”, Harvard Journal on Legislation, Vol. 54, p. 503.
Marshall, M.N. (1996), “Sampling for qualitative research”, Family Practice, Vol. 13 No. 6, pp. 522-526.
May, J., Dhillon, G. and Caldeira, M. (2013), “Defining value-based objectives for ERP systems planning”, Decision Support Systems, Vol. 55 No. 1, pp. 98-109.
Nissenbaum, H. (2004), “Privacy as contextual integrity”, Washington Legal Review, Vol. 79, p. 119.
Sarewitz, D. and Pielke, R.A. (2007), “The neglected heart of science policy: reconciling supply of and demand for science”, Environmental Science and Policy, Vol. 10 No. 1, pp. 5-16.
Sen, A. (1977), “Social choice theory: a re-examination”, Econometrica: Journal of the Econometric Society, Vol. 45 No. 1, pp. 53-89.
Sen, A. (1999), “The possibility of social choice”, American Economic Review, Vol. 89 No. 3, pp. 349-378.
Smith, K.J. and Dhillon, G. (2020), “Assessing blockchain potential for improving the cybersecurity of financial transactions”, Managerial Finance, Vol. 46 No. 6, pp. 833-848.
Smith, K.J., Dhillon, G. and Hedström, K. (2018), “Reconciling value-based objectives for security and identity management”, Information and Computer Security, Vol. 26 No. 2, pp. 194-212.
Smith, K.J., Dhillon, G. and Carter, L. (2021a), “User values and the development of a cybersecurity public policy for the IoT”, International Journal of Information Management, Vol. 56, pp. 1-15.
Smith, K.J., Dhillon, G. and Otoo, B.A. (2021b), “iGen user (over) attachment to social media: reframing the policy intervention conversation”, Information Systems Frontiers, pp. 1-18, doi: 10.1007/s10796-021-10224-7.
Spitzberg, B. and Rhea, J. (1999), “Obsessive relational intrusion and sexual coercion victimization”, Journal of Interpersonal Violence, Vol. 14 No. 1, pp. 3-20.
Spitzberg, B., Nicastro, A. and Cousins, A. (1998), “Exploring the interactional phenomenon of stalking and obsessive relational intrusion”, Communication Reports, Vol. 11 No. 1, pp. 33-48.
Todd, C., Bryce, J. and Franqueira, V.N. (2021), “Technology, cyberstalking and domestic homicide: informing prevention and response strategies”, Policing and Society, Vol. 31 No. 1, pp. 82-99.
Truman, J.L. and Morgan, R.E. (2021), “Stalking victimization 2016”, Bureau of Justice Statistics, available at: https://bjs.ojp.gov/content/pub/pdf/sv16.pdf (accessed 16 January 2022).
Tversky, A. and Kahneman, D. (1986), “Rational choice and the framing of decisions”, Journal of Business, Vol. 54 No. 4, pp. S251-S278.
United States Attorney General (1999), “Cyberstalking: a new challenge for law enforcement and industry”, Report from the Attorney General to the Vice President.
Witesman, E.M. and Walters, L.C. (2014), “Modeling public decision preferences using context-specific value hierarchies”, The American Review of Public Administration, Vol. 45 No. 1, pp. 86-105.
About the authors
Kane Smith is an Assistant Professor of Information Technology and Decision Sciences at the University of North Texas, USA. His research focuses on information security and policy development, using analytical techniques to improve the security-related decision-making processes of governments and organizations. Current areas of interest include cyberstalking, blockchain technology, IoT and healthcare.
Gurpreet Dhillon is the G. Brint Ryan Endowed Full Professor of Cybersecurity at the University of North Texas, USA, and a Guest Professor at ISEG, University of Lisbon, Portugal. He holds a Ph.D. in Information Systems from the London School of Economics and Political Science, England. He has published over 75 peer-reviewed journal articles in top-tier outlets, including FT50 journals. His security work has been used in large public and private sector organizations, with some aspects used in drafting national security policies.