Abstract
Research in personal informatics (PI) calls for systems to support social forms of tracking, raising questions about how privacy can and should be supported when intentionally sharing sensitive health information. We focus on the case of personal data related to the self-tracking of bipolar disorder (BD) in order to explore the ways in which disclosure activities intersect with other privacy experiences. While research in HCI often discusses privacy as a disclosure activity, this framing does not reflect the ways in which privacy can be passively experienced. In this paper we broaden conceptions of privacy by defining transparency experiences and their contributing factors in contrast to disclosure activities and preferences. Next, we ground this theoretical move in an empirical analysis of personal narratives shared by people managing BD. We discuss the resulting emergent model of transparency in terms of implications for the design of socially-enabled PI systems. CAUTION: This paper contains references to experiences of mental illness, including self-harm, depression, and suicidal ideation.
Keywords: privacy, personal informatics, serious mental illness, bipolar disorder
INTRODUCTION
Personal informatics (PI) and self-tracking systems help individuals collect, manage, reflect on, and act on personal data [38]. While PI systems support an array of personal data and data management practices, many individuals use them exclusively for managing health and wellness. One important use case is the self-management of serious mental illness (SMI), such as bipolar disorder (BD).
BD is characterized by manic, hypomanic, depressive, and/or mixed episodes. People managing BD experience fluctuations in mood and energy levels that can be difficult to control. While self-managing BD often involves documenting behaviors such as sleep, exercise and social interaction on a daily basis, the characteristics of BD can make self-assessment difficult. Research has suggested that collective systems for tracking and interpreting behaviors can be beneficial [48]. To successfully support the management of BD, PI systems must move past the individual-centered focus of personal informatics into more collaborative models of engagement [48, 53] with formal and informal care networks [53]. However, sensitivities and stigmas around mental health generally [19, 48], as well as the affordances available in the experience of managing BD, require privacy to be considered throughout the design process [48]. This paper examines the privacy experiences of people managing BD as part of a multi-phase research program to develop a collective PI system for people who manage BD [34, 48, 69].
Previously, we conducted a series of semi-structured interviews with individuals with BD to understand their lived experiences [48, 69]. During these interviews, we asked participants about their experiences tracking, assessing, and sharing their mental health. To understand the privacy experiences of participants during these interviews, we turned to prior work in privacy and (mental) health-related PI systems. A common model for privacy used in PI research is disclosure, or “the telling of the previously unknown so that it becomes shared knowledge” [31]. This is often operationalized through disclosure practices, preferences [15, 58, 60], and concerns [7, 42, 58], such as whether and how information should be shared in the context of family-centered health-tracking [58].
Though disclosure is an important component of privacy, focusing on the action of disclosure did not reflect the ways in which our participants passively experienced privacy. For instance, some participants described how they struggled with highly perceptible conditions of BD such as inconsistent social engagement or rapid speech. Further, centering our analysis on disclosure risked privileging the perspective that what is (in)visible to others is always the result of intentional choice. In the case of our participants, there were multiple factors influencing their ability to modulate the (in)visibility of their mental health status. These two points (i.e., that not all experiences of privacy are actions, and that we cannot assume intention with regard to all revelations) pose particularly challenging requirements for the design of PI systems to support SMI.
Other authors in HCI have also advocated for understanding privacy beyond disclosure [7, 22, 35, 50, 51, 54, 57]. Specifically, we draw from the framework of contextual integrity to understand our participants’ transparency experiences and contributing factors in contrast to disclosure practices, preferences, and concerns. Doing so enables us to better understand privacy in terms of specific contextual factors related to (in)visibility of identity. This helps move towards a more holistic model of collective and situated information sharing practices [7, 22, 54], and allows us to foreground and address the distinct experiences of vulnerable populations [40].
This paper addresses the following research questions:
RQ1: How do people managing BD experience the (in)visibility of mental illness?
RQ2: What factors modulate these (in)visibility experiences?
RQ3: What are the implications for collective PI system design for managing BD?
As we will explain throughout the paper, transparency is a state related to, though distinct from, the activity of disclosure. Transparency results from an assemblage of personal data in a particular context generated by both automated and human-driven mechanisms [13]. Thus, transparency experiences are the ways that people are made (in)visible within a particular context. This notion of transparency is contextually grounded in the lived experiences of our participants, including social and cultural attitudes towards mental illness. To understand what modulates these experiences of (in)visibility, we conducted an inductive thematic analysis of semi-structured interview transcripts to identify the influences that impact transparency experiences, which we call transparency factors. We identified five such factors: personal, audience, interface, externalization, and societal factors.
While prior research has investigated how PI systems can facilitate and support both self-driven [44] and collective mental health self-management practices [48, 69] in the context of managing BD, work has not yet 1) focused on privacy practices in PI systems using transparency as an interpretive framework, 2) explored what influences or motivates transparency experiences, or 3) disambiguated the privacy frameworks of disclosure and transparency in PI systems. Our research seeks to fill these gaps, broadening and complicating ideas of privacy to, in turn, inform design research.
RELATED WORK
The intersection of personal informatics, mental health, and privacy is a growing focus in HCI [19, 52, 53]. We discuss the two areas of interdisciplinary literature that motivated our analytic perspective: PI systems for mental health management, and privacy in health-related PI systems.
PI systems and mental health
PI platforms and self-tracking applications help individuals collect, manage, reflect on, and act on personal data [38]. These systems track data such as physical activity, location, finances [26], internet history [35], social media [19, 62], and health-related data [46, 62]. Individuals use health-related PI systems for tasks such as finding support [19, 52], changing their behavior [62], or managing a chronic condition [46].
Though these PI systems are often developed for general users [27], they have become therapeutic resources for individuals managing serious mental illness (SMI) such as bipolar disorder (BD) [44, 45, 47]. Self-tracking—both clinically advised and self-directed—is central to the treatment of BD. Self-tracking activities might include mood monitoring [32, 43] or paper-based diaries [20]. However, an individual-centered design perspective can miss opportunities to support the social practices that serve an equally important role in health [23, 24, 25] and health care [34, 48, 53, 69].
A common social practice of personal health informatics systems is to support health information sharing between patient and clinician [74]. While patient–clinician relationships are important, this focus can miss how informal relationships impact health care outcomes [19, 44, 48]. Concepts like “relational recovery” from the mental health domain acknowledge the role that social supports beyond the clinician–patient relationship play in coping with chronic illnesses [59]. It is against this backdrop that researchers have called for the design of collective PI systems [48]. These systems leverage models of sociotechnical relationships to support a range of collective sensemaking practices, including information sharing, comparing, and pooling [48], within peer networks [52] and families [58]. This paper is part of a broader program to understand how to design and develop a social PI system to support the management of BD [48, 69].
Health-related PI systems and privacy
The sensitivity of health data [11, 19] and stigmas surrounding mental health [48] require us to thoughtfully consider privacy throughout the design of PI systems to support the collective management of SMI like BD [22, 48]. We discuss prior research concerning privacy and sharing health information in technology-mediated social contexts to inform our analysis of participants’ experiences of privacy.
A common strategy for framing privacy in health-related PI systems is through disclosure practices, preferences, and concerns. In the PI literature, this is frequently studied in the context of social media, with research seeking to understand whether, how, with whom, and why people choose to share or hide health-related PI data [3, 5, 14, 25, 46, 73]. People post personal health data on social media for a variety of reasons, including accountability, emotional support, motivation, or getting advice [49, 75]. For example, people might use Instagram to track and share food data to support healthy eating behavior [14], or they might receive social support related to chronic disease management on Facebook, Twitter, or a condition-specific site [46, 64].
While people do disclose sensitive health information on social media [4, 19, 55, 73], willingness to share personal health data is by no means the default stance, due to perceived threats to reputation or privacy [11]. People are typically more comfortable making revelations about sensitive or stigmatized topics online when anonymous [56]. For instance, people with “throwaway” accounts are more likely to make mental health disclosures on Reddit [19]. Researchers have synthesized these disclosure and privacy practices into design guidelines, recommendations, and frameworks for PI systems [17, 23, 24, 25, 53]. We draw from this work to understand the complex reasons that people share or hide sensitive or stigmatized health information in social PI systems.
Several authors have explored how people engaging in technologically-mediated disclosure decide to share or withhold information by performing a “privacy calculus” [21, 37] or mental computation to rationally weigh the benefits and risks of sharing content online, such as health information [36]. However, this process can be complicated by decision-making biases and heuristics such as loss aversion [1, 18] (e.g., people tend to avoid divulging personal details online when the potential risks are highlighted instead of the potential gains [1]). Usable privacy research identifies design opportunities that can “nudge” people towards more informed and beneficial privacy decisions [1, 71, 72].
The PI literature examining disclosure behaviors, preferences, and concerns, as well as the privacy calculus that can inform these decisions, provides a useful basis for our analysis. As we discuss later, we draw from this literature the importance of understanding how disclosure actions and choices influenced our participants’ experiences of privacy.
Beyond disclosure and privacy calculus, HCI researchers have considered the contextual nature of privacy. Based on Altman’s theory of privacy regulation [2], Palen and Dourish frame privacy as a dynamic, negotiated practice of managing different contextual boundaries [54]. Researchers have also examined privacy using Nissenbaum’s “contextual integrity”, which assesses whether the norms of information flow have been contextually violated [7, 50, 51]. Our work supports these orientations towards privacy as a contextual phenomenon; however, we broaden the scope of inquiry around practice to consider a set of transparency experiences.
METHOD
The data analyzed for this study was collected during a three-phase co-design project with people who have been diagnosed with BD. The focus of these sessions was to identify ways to improve the display and usability of self-tracking data for people living with SMI. Along with visual elicitation activities (reported and discussed elsewhere [69]), we conducted a series of interviews with participants that focused on their personal experiences with BD and how they self-monitor their mental health. This paper reports the results of a two-phase qualitative analysis of the interview transcripts in order to identify privacy-related requirements for collective PI systems for BD. Throughout the interview process, participants regularly mentioned decisions, intentions, concerns, and experiences related to the (in)visibility of their illness, both in general and as potentially facilitated by self-tracking artifacts. Our goal in focusing on issues of information sharing and access was to better understand the mechanisms of (in)visibility that need to be supported by the privacy features and functions of our collective PI system. These analyses provide the foundation for a model of privacy through the lens of transparency experiences.
Statement of Ethics
In working with vulnerable populations, our research group explicitly values the multiple types of expertise held by our participant collaborators. The primary goal of qualitative data elicitation activities was to provide a platform for our participant collaborators to share their experiences, insights, concerns, and ideas. Our study was reviewed and approved by the University of Washington’s Institutional Review Board (IRB). In order to minimize potential harms to participants, our work in developing a PI system to support the management of BD has been conducted in continuous collaboration with participants and clinical experts. Throughout their engagement with the study, participants were reminded of the voluntary nature of answering questions and performing design activities. Regardless of level or duration of participation, all participants who came in for a session received full incentive payment of $25 plus $5 for travel expenses.
Participants
We worked with fourteen individuals (5 male, 9 female, 20 to 64 years old, average age 45.9), all of whom self-reported that they: (1) were over the age of 18, (2) had an existing diagnosis of bipolar disorder, and (3) had not been hospitalized for mental health issues in the last six months (i.e., were stable at the time of interviews). Participants were recruited via local community organizations such as the National Alliance on Mental Illness (NAMI) local chapter, the Depression and Bipolar Support Alliance (DBSA), the Institute of Translational Health Sciences (ITHS) patient recruitment service, and materials distributed through campus health care clinics and email listservs. We also invited participants to share information about the study with their personal social networks. Each participant joined us for a series of interviews and design activities that helped us learn more about their experiences with BD.
One potential limitation of our work is that those individuals who volunteered to talk candidly about their experience with BD for our study may tend towards more “visible” privacy practices in relation to their mental illness status and history. While we recognize this potential bias, we do not come to conclusions in our analysis about the overall distribution of (in)visibility tendencies amongst individuals with BD. Rather, we identify generalizable transparency factors and suggest design approaches that allow for flexibility in supporting a wide range of transparency experiences.
Data
The first two co-design sessions were semi-structured protocols focused on personal experiences with BD, self-monitoring practices, and attitudes towards information sharing. Because the protocol for the third session centered on reactions to visual motifs to represent the experience of BD [69] and not experiences managing BD, we focused our analysis on the first two semi-structured interview transcripts.
During interviews, we asked participants about their experiences with BD, starting from either the first appearance of symptoms or their formal diagnosis. We prompted them to share how they check in with themselves over time, how they know if they are doing okay, and what triggers they regularly monitor. We also asked about whether they tended to share these observations with anyone, how they decide with whom to share information about their mental health, and if their sharing preferences or practices have changed over time. Although we did not ask specific questions about privacy practices, data protection, privacy preferences, or disclosure concerns, all of these ideas were mentioned throughout the interviews.
The two semi-structured interview sessions with fourteen participants produced 28 recorded interviews. Interviews were initially transcribed using an automated transcription service. The research team manually verified and de-identified each transcript.
Analysis
Qualitative analysis was performed in two phases. First, we used discourse analysis to identify transparency experiences described by participants, which are experiences that foreground the visibility or invisibility of participants’ mental health states or conditions. Next, we conducted a thematic analysis of these experiences to understand the factors that influenced the transparency of participants’ mental health status.
The first analytic task involved locating all passages in the transcripts which contained a participant’s description of transparency experiences. Transparency experiences were described by participants as situations when the (in)visibility of a facet of their identity, such as medical history, physical appearance, or personal experiences, played a significant role in their ability to intentionally share or conceal their mental health status. Participants communicated their experiences with information sharing through discursive markers [28] such as language, framing, and context, so we used discourse analysis [30] to formally identify these passages using an iterative and inductive process.
Coding was led by the first author who periodically verified results and resolved discrepancies with the last author, who is trained in discourse analysis. Ultimately, the coding scheme was validated through discussion amongst the full research team, a process referred to by Johnstone [30] as achieving evidentiary warrant through arguing to consensus.
Discursive markers [28] of transparency experiences included:
Topic/object: The object of a transparency practice that is made (in)visible. Objects included digital information, photos, medical history, aspects of identity, etc.
Social actors: Other people involved in a transparency experience. Actors included clinicians, family, friends, colleagues, roommates, no one, etc.
Activities: Things being done through transparency language, such as sharing, revealing, masking, synchronizing, building empathy or rapport, etc.
Outcomes: The result of a transparency practice. Outcomes included changing medication, hiding a secret, cooperative work, achieving goals, etc.
The second analytic task identified and categorized the influences that impact transparency experiences. We conducted an inductive thematic analysis [6] of the transparency experience discourse segments. This task was led by the first author who identified initial transparency factors. In a process that followed the same validation principles as the initial discourse analysis, the third and fourth authors applied these codes to subsets of the data. Results were compared, discrepancies were reconciled through discussion, and the codebook was revised. This iterative process continued until all data were coded to the satisfaction of the analysis team. Next, we describe each of the five transparency factors that were identified through the thematic analysis of transparency experiences.
RESULTS
Our analysis of the factors that shape transparency experiences revealed complex, multi-layered systems of interdependence. We found that the ways in which a characteristic, emotional state, behavior, or diagnosis was visible to others influenced an individual’s ability to exercise intentional disclosure of mental illness. In other words, certain circumstantial realities mediated our participants’ ability to actively control their privacy. For example, externalized symptoms such as rapid speech were inherently more visible than internal experiences of chronic anxiety and depression. The (in)visibility of BD experiences played a significant role in participants’ abilities to manage personal information.
In this section, we report how different factors impacted a participant’s ability to exercise intention and agency when revealing or concealing aspects of their mental health status. This list is not meant to be exhaustive or mutually-exclusive but rather to make important analytic distinctions related to people’s experiences. These factors were entwined in participants’ narratives; however, by disambiguating five main themes across these accounts, we are able to see how a transparency lens can inform the design of PI systems for SMI.
Personal factors
Personal factors refer to the individual or local qualities that impact the transparency of a person’s experience with BD. This was the most prevalent factor; a non-exhaustive list of examples includes: personality attributes (e.g. being shy or open), individual goals and expectations, abilities or skills, and personal histories and experiences.
Individual BD symptoms modulate transparency
Understandably, the symptoms of BD experienced by an individual played a substantial role in whether a participant’s status was visible. During manic phases, a few participants described feeling over-exposed, as P13 describes: “when I came out of the school and I was manic, I was like, everyone should know everything”. Manic symptoms also obscured or distorted a participant’s ability to communicate what they felt was an authentic presentation of how they felt in a given moment. P13 described explaining during a clinical visit that “[the symptoms are] not presenting the way that you’re feeling. That… can be very frustrating”.
A common transparency experience reported by our participants was physical isolation during an acute episode. Participants described “going into hibernation” (P09) or “detaching” (P06) from other people “when things get rough” (P09). Participants offered a number of reasons for these experiences, for instance to shield themselves or others: P06 said, “I don’t want it to be worse by other people being involved”; P13 said “I’m avoiding somebody because I know I’m being manic and I don’t want to freak somebody out”; P06 recalled trying to explain to a close friend why it seemed like they were doing well most of the time, despite serious struggles with their mental health: “I’m not around you when I’m not fine”. Several participants described working to re-establish visibility with family and friends after such periods of isolation. P09 described letting people “know that I still love them but this is what happened, I’m dealing with it and your support would be nice”. Here, lack of visibility is not adequately described by the choice or preference to disclose that is typically addressed within conventional privacy frameworks. Rather, these experiences reflect situations that are too intense to manage with intention, so participants felt a need to withdraw.
Maintaining appropriate transparency with care network
On the other hand, most participants found that maintaining continuous visibility for some members of their support systems was a critical and necessary tool for managing mental health. Several participants scheduled regular check-ins with people, such as P20: “I check in with my parents and I check in with my close friends. And my close friends, my two best friends have been also kind of keeping track and I really open up to them because I’ve always been bad at communicating with people”. Others created more formal awareness plans with friends and family, such as P19: “if you see me, you know, very hyper… you gotta call my psychiatrist…, if not take me to the hospital”. Because of these “all-or-nothing” approaches to revealing emotional states, check-ins often required participants to let loved ones see the full reality of their symptoms, even when they would have preferred otherwise.
Supporting prior work [19, 52, 69], participants were selective with whom they shared their diagnosis; this often coincided with people (such as roommates, close family members, or colleagues working in close proximity) who were in positions to witness evidence of struggles with mental illness. Because of concerns about gaslighting and the sheer labor required to mask visible symptoms, P13 discloses their diagnosis immediately to potential romantic partners: “when it’s somebody that like I’m on a date with or getting involved with, I tell them right away. I don’t think it’s fair to do anything else”. P09 does not make an effort to hide from people who are close to them, both emotionally and in physical proximity: “when it comes to my friends, like if it’s a casual friend, no — but [I would tell] someone I live with or a best friend”. The ability to be visible to certain people (or to not feel the need to conceal visible aspects of mental illness) is not just a desire for increased intimacy with close ties or for maintaining space with weak ties (a disclosure-oriented framing of privacy), but a critical component of their mental health self-management plans, directing energy away from vigilant concealment and towards important practices of self-care (a transparency-informed approach to sharing personal information).
Self-articulation is an ongoing process
A majority of our participants experienced difficulty intentionally articulating and sharing their experiences with BD at some point. For many, this was related to the protracted experience of diagnosis, which for BD takes an average of ten years [70]. For many, symptoms were visible long before they received an official diagnosis. As a result, our participants described adapting to the ways in which their mental health status revealed or hid itself at times. This included not understanding how to communicate with doctors (P05) or not understanding how to explain the nuanced or less visible aspects of extreme mood fluctuation (P07, P20). P18 succinctly ties together the ability to articulate with making their experience visible: “being able to explain it better to myself subsequently helps you explain it better to other people”.
Risks of exposure
Many participants were highly aware that witnessing mental illness can be overwhelming. They described the ways that they mask intense experiences like suicidal ideation or self-harm. As P18 described, it takes “an active effort to just keep myself being like a normal person”. Exposure to realities like these, experienced by many as part of BD, can be overwhelming to loved ones, as P09 described: “when I want to [update friends and family] it’s like a huge avalanche of stuff to talk about but then they worry that I’m talking about it a lot”.
Conversely, when establishing new relationships with clinicians or friends, recounting personal mental health history can be emotionally draining. As P07 described, “I had to give my case history to this new [therapist] and… to explain… like 45 years of my life to get me to this point, you know, I’m in tears when I talk about this thing and you have to do it in a couple of sessions. I don’t ever want to do that again if I don’t have to”. This example highlights risks not in the outcome but the process of revelation.
Participants often responded to tensions between revealing and concealing their condition by orchestrating mixed or partially obscured visibility states. For instance, P05 described revealing just enough of their experience with SMI to provide context to their child without providing so much that they were overwhelmed: “I’ve never told [them] anything about what was going on with me until I had OD’s in 2014 and I told [them] and I said the reason why is because I didn’t want to burden you”. In this case, the event of a highly visible overdose pushed the participant to explain to their child what was happening. This is not best explained as an intentional act of disclosure, but as a contingency response to having a personal experience exposed. Achieving these mixed states provided some participants with a sense of control over their condition without worrying their care network. However, partial visibility was consistently described as being a response to situations that simply could not be concealed.
Audience factors
Audience factors are qualities of a witnessing individual or group that influence the transparency of mental illness.
Familiarity with BD can impact care capacity
Perhaps the most common audience factor that mediated (in)visibility was a witness’s familiarity with SMI. Self-managing BD requires continuous work, including self-tracking, communication, and collaborative sensemaking. Transparency of mental health status was often based on whether an audience was capable of making sense of BD behaviors and moods as well as joining in that care work. As P19 described, “I’m totally open with both my sisters. My younger [sibling], [they are] an RN so [they are] kind of the one that helped me get to the hospital; so [they] manage my healthcare. And then [my older sibling] helps me out with my day to day what’s going on”. Other participants described being more comfortable talking about mental illness with therapists who were trained to respond appropriately (P09: “[depression is] mostly something that I would want to talk to a therapist about”), or if the person had prior experience with mental illness (P12: “if there is someone else who has a family member with bipolar, I’m more likely to talk about [it]”).
In this way, (in)visibility is not just a function of revealing or concealing an experience with mental illness, but it also includes the ability to make sense of what is seen. For those without training or personal experience, the most overt and perceptible behaviors associated with BD such as extreme irritability, delusions of grandeur, or debilitating apathy and depression, can be seen but not helpfully engaged.
Translating symptom recognition into support response
Most participants prized the judgement and collaboration of capable and trusted people – individuals who could see, interpret, and act on the participant’s mental health status. For example, P20 explained, “I’ve told my friends… if you notice anything, please tell me because I’m just trying to be normal”. P19 described what it looks like when collaborative sensemaking works: “A lot of people give me input and say, you know, this is what it is. Then I can talk to my therapist or talk to my psychiatrist about that and they can go yay or nay”. One participant said that a lack of corroboration can be frustrating (P05: “I like therapists who give me feedback and [they would not] and it drove me crazy.”). Our participants valued competent caregivers who had the ability to witness and validate their experiences, and help translate those insights to action.
Having an audience helps authenticate experience
Finally, there were cases where a lack of audience was significant. Without a capable witness, participants were unable to verify or validate the parts of their experience that lay outside their internal mental landscape. Sometimes this was because they had a sparse social network (P12: “I didn’t have any of those sisters or friends”), though this could also be due to a lack of capable ties (P12: “a big part of some of my mental health issues was the fact that if you can’t talk about the things you are upset about, like it’s taboo or your parent isn’t in a position to talk to you about it”). In both cases, not having a witness is highly consequential. P18 put this most succinctly: “this is a thing that I’ve been carrying around for a long time and I’d never disclose it to anybody and it’s getting kind of a lot to carry”. Where the idea of disclosure practices implies a recipient, transparency allows us to appreciate how a lack of visibility can impact caregivers’ abilities to witness and validate the experiences of people with BD.
Interface factors
Interface factors are qualities of the mechanisms that mediate or govern transparency, such as technologies, policies, and laws. In this sense, we are using the term interface to draw attention to the technological affordances, constraints, and mechanisms that modulate visibility in sociotechnical systems. In contrast to the activity of disclosure, this theme highlights the state of being identifiable or obscured. For example, while electronic health records (EHR) capture many details about a person’s conditions, they do not shed light on details of personal narrative that might reveal information about a patient’s values or personal goals. In this way, the EHR makes visible certain details about a person’s health history while obscuring other, lived aspects about personal history.
Digital features can afford or obscure visibility
A few participants described how some forms of technology enabled transparency or visibility more than others. P02 described differences between texting and calling their therapist: “It’s almost like text is less urgent than calling him … so, if I call him it’s like I’m on my way to the hospital right now because I was about to kill myself … But if I text him, it’s like, uh, y’know, after whatever appointment he’s in then he’ll check it”. In this case, a phone call accompanied an acute episode of suicidal ideation. On the other hand, a text corresponded to less obvious daily challenges that might go un-noted by people unfamiliar with P02’s personal history.
Similarly, P03 described how a lack of contextual cues like body language on Facebook made them less likely to feel seen on these platforms. For some this was a benefit, enabling them to shield loved ones from explicitly witnessing their mental illness. For others, this was a barrier to communicating the extent of their illness, forcing them to verbally articulate their emotions rather than having a friend or family member see that they were not able to take care of themselves. These examples implicate both the affordances and the constraints of communication technologies in making a participant’s experiences with BD (in)visible.
Procedural standards highlight and hide BD experiences
Subtle forms of interface factors mandate (in)visibility as a result of regulations and policies. For instance, almost every participant described their fatigue and frustration about repeatedly filling out forms or being subjected to intake interviews as part of clinician visits. These forms and interviews are aimed at making patients, their symptoms, and histories systematically legible to clinicians. Similarly, lawyers and social workers make a client’s situation legible to courts, such as when P05 entered their child into foster care. While these types of standardized forms can show mental health experiences in comparable formats, heightening the visibility of marginalized populations in some ways, they can also omit idiosyncratic or deeply personal experiences.
Related to legal procedures, policies related to identity can also impact (in)visibility for vulnerable populations. P06 describes how they used Facebook until their account was suspended pending identity verification: “[Facebook] demanded that I provide them with my personal identification and if the name on my ID did not match the name on my account that I would have to change my Facebook account to that name”. P06 declined to provide verification and stopped using Facebook.
Focusing on the transparency enabled by various interfaces helps us see how the standardization embedded in many policies and technologies can dictate a person’s (in)visibility. Models of privacy centered on disclosure may miss the ways in which sociotechnical systems compel (in)visibility, or may reduce these experiences to idiosyncratic concerns and preferences. For many participants, mandated exposures mediated by standardized interfaces reduced their sense of agency and control over the transparency of their experiences.
Externalization factors
Externalization factors refer to the innate transparency of a particular experience. For instance, rapid speech is a visible experience of BD because it manifests as external phenomena. Conversely, internal experiences of BD (such as irrational thinking) tend to be invisible. Whether an experience was external or internal modulated the transparency of the experience in ways that are difficult to control.
Outward-facing signals of SMI
P18 describes how the size of a community made their externalized experiences more visible: “we all live in a very small community so you can kind of see all of [these concerning behaviors] happening”. P03 wryly pointed out “I was in the psych ward. That was a pretty clear indicator [something was not right]”. The (in)visibility of an experience is as much a function of the environment in which it occurs as of the experience’s inherent visibility.
This was a particular concern for some of our participants whose mood fluctuations were both extreme and highly perceptible (i.e., accompanied by changes in speech patterns, increased irritability, weeping, etc.). Participants described the innate visibility of externalized bouts of crying (P05: “You can’t really hide [from a partner that] you’re going into the bedroom and bawling”). Likewise, manic episodes were visible through pressured or rapid speech or singing (P13: “I was talking very rapidly and singing songs and just stopping speaking and just singing”). Co-morbid conditions can also be highly visible, contributing to overall assumptions about a person’s mental health status (P18: “I tell [people] I have OCD because I’m much more open about OCD, everyone’s like, well, that’s not a surprise at all.”).
Inward personal experiences
Conversely, there are some internal aspects of mental illness that can afford a degree of visibility but remain opaque to some. While familiarity with mental health issues helped contextualize experiences of mental illness (see Audience factors), trying to integrate the visible and the invisible aspects of their experiences for others was taxing for many participants. Putting an emotional experience into words is a difficult task, let alone one as complex as BD. P06 described the difficulty of using words for such a nuanced and intense experience: “The most that I’ve tried to explain that anyone can relate to is that they can say oh I’ve had anxiety attacks before, and this is not just anxiety and I wouldn’t be living the way I do and trying to scrape [by] and feed my cat and stuff if I could function better and have my medication”.
The innate (in)visibility of BD experiences, what we refer to as externalization factors, is not always within the control of the person with the mental illness. The inherent difficulty of articulating and expressing some experiences demonstrates how the (im)material qualities of an experience determine its transparency.
Societal factors
Social norms and conventions around mental health played both direct and indirect roles in the transparency of a participant’s mental illness. This is related to work that looks at how social values impact visibility, such as “passing” for white in apartheid South Africa [9], being transgender [8] or black [12] at U.S. airports post-9/11, or bisexual [29, 61] and queer [39, 67] erasure. Because ruptures with social norms were highly visible, the experience of being visibly “normal” (P01, P05, P18, P20) meant performance of or adherence to social norms and practices.
Social stigmas
Many participants described hiding parts of their experience due to feelings of shame or fears of being diminished: “So what [my sibling] does [when I tell them about depression] is laughs or tells me to stop crying and stop being such a drama queen” (P09). P03 described the reaction of their co-workers to their BD diagnosis: “I went to a job and I said, ‘Guess what? I have a mental illness. I didn’t even know that. That’s why I was out of town.’ And they said, ‘Oh, no! Maybe you’ll HURT the children!’ and I was really shocked”. This supports prior work about how social stigmas around mental health impact disclosures [4, 56].
However, a few participants showed resilience in the face of stigma, such as P03: “Sometimes I use mental illness to shock people”. P05 described how visibility can be a means for activism, despite negative reactions: “I’ve always been very open about my mental illness, sometimes that’s backfired, sometimes it wasn’t a good thing…. I think people need to be educated, so anything to help with it”. Participants also encountered accepting environments, as P05 recounted: “I was hospitalized and instead of telling my boss I was hospitalized I used my vacation time and it was afterwards I told her and she said no you don’t use vacation time for sick time and they were supportive”. While social stigmas strongly influenced our participants’ transparency experiences, they did not always lead people to hide.
Social norms spotlight transparency factors’ complex links
As an example of how our transparency factors may be entwined, we consider how P07 invokes specific social norms (parent-child relationships), a specific audience (their parents), and externalization factors (suicidal ideation) to produce a specific configuration of transparency. In talking about symptoms of depression with their parents, P07 reported “I don’t think any parent wants to hear [about suicidal ideation] especially from a child”. For P07, adherence to perceived norms in social relationships meant concealing their painful experiences from a particular audience (their parents). The internal nature of suicidal ideation provided affordances and constraints for making this ideation visible (e.g. disclosure or behavior). These factors combined to produce a set of possibilities that afford and constrain the transparency of P07’s experience.
Finally, the existence of intense social stigmas complicates understanding privacy as solely a matter of disclosure. This is exemplified by P09: “I prefer to be open and honest, not with people I meet on the street or co-workers because unfortunately we’re still in this stigmatization of mental illness”. Throughout our results, participants expressed a desire and a need to make themselves visible to other people. However, this is situationally impractical precisely because of social perspectives around mental illness. To view privacy in this case as a matter of preference is to focus on the behavior and perspectives of social actors (i.e., sharing preferences) instead of the context against which these behaviors and perspectives take place. Thinking about transparency allows us to consider a specific type of contextual background, since being transparent is as much about the object’s environment as the object itself. This allows us to begin to trace the contours of how and where social values (including but not limited to stigmas of mental illness) can be addressed in system design.
DISCUSSION
Limitations
Our study involved a small sample which, as we mentioned, might have been biased towards individuals who are more “open” or “visible” about their experience with BD. The interviews we analyzed were also not structured specifically to interrogate privacy practices. We do not claim that the perspectives around transparency discussed by our participants generalize across all people managing BD or serious mental illness; however, in this discussion we identify ways that they can inform useful guidelines for designing social PI systems for this population and provide a foundation for future co-design work in this space.
Implications for Privacy-as-Disclosure
Designing for vulnerable populations can help to highlight normative contingencies in existing systems. This can help deepen our understanding of a multi-dimensional concept like privacy and lead to the design of more accessible, inclusive, and thoughtful computing systems.
Our analysis revealed continuity with disclosure-centered privacy frameworks. Participants described preferences regarding with whom to share or hide things, in line with prior work on disclosure preferences within PI systems [4, 19, 25, 33]. Similarly, they expressed concerns about over-sharing, such as the consequences of disclosing mental illness in the wrong context [4, 19, 25, 33], or mentioned variations of a privacy calculus [36] used to determine whether they shared or withheld information in particular moments.
However, our focus on transparency helped identify gaps in disclosure-centered frameworks of privacy. For instance, privacy-as-disclosure presumes the ability to externalize personal experience through intentional disclosure. However, most of our participants reported difficulties in articulating their experiences of SMI, as well as fatigue resulting from constant vigilance against misinterpretation or misunderstanding. Focusing on disclosure can omit the critical step of articulation and interpretation that is necessary for intentional disclosure.
Focusing on disclosure also prevented us from examining passive experiences of privacy. For our participants, privacy was not always modulated by action, but rather by being in a particular way, in a particular place, at a particular time. Framing privacy through disclosure also constrained our ability to examine experiences of privacy where a participant had little choice. For example, experiences with visible symptoms, hospitalization, or court proceedings showed how privacy was in many ways impacted by the specific contextual affordances.
Our participants’ experiences were more closely aligned with models of privacy as a contextual, collective, ongoing social practice [22, 54]. Applying a context-based framework similar to contextual integrity allows us to examine our participants’ experiences without requiring the presence of action or choice. However, contextual integrity focuses on flows of information among actors and through different spaces, looking at these flows from a systems perspective, whereas our examination takes place at the level of interpersonal dynamics. Instead of focusing on information flows against social norms, the concept of transparency experiences foregrounds how the affordances of (in)visibility either support or mitigate agency in the sharing of personal data. In this context, transparency refers to the degree to which a person’s presence, identity, state, or other personal characteristics are discernible against a contextual background. Situations or circumstances that are difficult to hide have a higher degree of transparency, in contrast to situations that are more difficult to reveal with lower degrees of transparency.
Thinking beyond negative privacy
Privacy, especially in HCI literature, is often viewed as a protection, primarily focused on keeping bad actors out. In many cases, this is a useful framework, and indeed our participants described ways that they keep secrets. However, they also described many ways in which notions of positive privacy were required [16]. For our participants, sharing information was a crucial cog in their health care plans. From regular check-ins with family members and roommates, to updating clinicians on changes in mood or behavior, to re-connecting with loved ones after a strong episode of depression, giving out information was as consequential for both emotional and physical well-being as keeping it private.
Transparency Factors can be multiple and multi-leveled
Transparency factors were often entangled throughout a given experience of disclosure. While some experiences involved a single factor (e.g., P18 disclosing OCD because of their inability to hide symptoms), most practices were informed by multiple factors. For instance, the (in)visibility of mental illness to work colleagues was informed by qualities of the audience (e.g., how adept are they at identifying behavior related to SMI?), qualities of the surrounding societal factors (e.g., what constitutes “normal” behavior in the workplace?), and personal factors (e.g., does this person want to share their experience with mental illness?). Understanding how these different factors overlap with each other can help inform the terms of an individual’s privacy calculus [21, 36, 37].
Furthermore, transparency practices seemed to be hierarchical or nested: a person may engage in a privacy behavior, that behavior occurs because of some kind of privacy preference, and those preferences occur within certain conditions of possibility. Participants may be more open with other people about their mental illness, but that motivation can be backdropped by motivations to subvert social stigmas around mental illness (e.g., shocking people), or by a personal history with unsupportive family members. What we begin to see is a complex, multiple, and hierarchical web of factors that motivate transparency experiences. Future work will explore how configurations of factors (nested, multiple, hierarchical, etc.) impact privacy behaviors and preferences.
Social actors play a role in modulating (in)visibility
Extending beyond intended recipient(s) of a disclosure allows us to see how other social actors can diminish or amplify a participant’s transparency to a broader audience and to the participant themselves. Our results suggest that privacy preferences may not be entirely individual, but to an extent socially-negotiated. Several participants described maintaining continuous visibility to their care team and loved ones, often in the form of “check ins”. These check-ins are informed by both individual preferences of the participant (e.g., a person’s desire to share) and of the audience (e.g., a person’s desire to see). This suggests that preferences are in fact not individual but the result of configurations of transparency factors. For instance, many individual preferences for sharing information about managing BD were informed by some combination of social stigmas around mental illness (societal factors), an audience’s experience in dealing with SMI (audience factors), whether management can even be hidden or revealed (externalization factors), or mandated visibility due to legal or policy requirements (interface factors). Therefore, one way to consider privacy in social PI systems is to consider how and what preferences are being socially-negotiated, and provide the means to collaboratively resolve these visibility settings.
DESIGN IMPLICATIONS
We provide recommendations for designers adding social features to health-related personal informatics systems or creating tools for collaborative care contexts. These are particularly relevant for designers working with vulnerable populations or within the domain of mental health.
Create contextual privacy settings
We recommend that indicators and controls related to data access in social PI systems be lightweight, integrated into the context of application use, and persistent throughout interfaces that support PI stages. Our findings support prior work around the multi-dimensional nature of privacy [7, 22, 50, 51, 54] and the need for more context-adaptive technologies [66]. This suggests bringing data privacy related controls into the context of application use, rendering controls that are relevant and actionable to the user at a specific time [65].
This might be accomplished through the operationalization of transparency factors into PI systems. By identifying assemblages of data that represent different factors, PI systems could integrate privacy settings in relation to the contextual discernibility of experiences instead of pre-set disclosure preferences (common in social media [41]).
As an example, we might examine the episodes of isolation many participants reported in response to acute symptoms of depression (reported as a reduction in visibility). Transparency of this state might be determined by assessing externalization factors (e.g., has this person told anyone, or are they exhibiting observable behaviors related to acute depression?), audience factors (e.g., who is likely to correctly interpret this experience, or who is likely to respond productively?), personal factors (e.g., what degree of visibility does this person desire in this situation?), interface factors (e.g., through what mechanisms can visibility be provided?), and societal factors (e.g., to what degree does this experience align with social norms?).
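To make this concrete, the sketch below illustrates one hypothetical way a system might encode such an assessment. Everything here is an illustrative assumption rather than part of any existing system: the `Factor` enum, the `TransparencyAssessment` structure, the [0, 1] scoring, and the rule that externalization acts as a floor on discernibility are our own simplifications of the analysis above.

```python
from dataclasses import dataclass, field
from enum import Enum


class Factor(Enum):
    PERSONAL = "personal"
    AUDIENCE = "audience"
    INTERFACE = "interface"
    EXTERNALIZATION = "externalization"
    SOCIETAL = "societal"


@dataclass
class TransparencyAssessment:
    # Per-factor scores in [0, 1]: 0 = fully obscured, 1 = fully discernible.
    scores: dict = field(default_factory=dict)

    def suggested_visibility(self) -> float:
        # Externalization acts as a floor: an experience that is inherently
        # visible (e.g., rapid speech) cannot be made less discernible by
        # settings alone, whatever the person's stated preference.
        floor = self.scores.get(Factor.EXTERNALIZATION, 0.0)
        others = [v for k, v in self.scores.items()
                  if k is not Factor.EXTERNALIZATION]
        preference = sum(others) / len(others) if others else 0.0
        return max(floor, preference)


# Example: an episode of isolation during acute depression -- few observable
# behaviors, a capable audience available, and a preference for low visibility.
isolation = TransparencyAssessment(scores={
    Factor.EXTERNALIZATION: 0.2,  # little outward signal of the episode
    Factor.AUDIENCE: 0.7,         # close friend likely to interpret well
    Factor.PERSONAL: 0.3,         # person prefers limited visibility now
})
print(round(isolation.suggested_visibility(), 2))  # 0.5
```

A real system would derive such scores from richer signals and user input; the point of the sketch is only that factors can be represented and combined, rather than reduced to a single disclosure toggle.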
Reconsider role-based access control
We recommend that social PI system designers reconsider role-based access control paradigms [63] and explore encoding more complex data transparency relationships in database schemas. We observed that the ways that individuals experienced transparency were not always binary (visible or not visible). Due to various constellations of factors, people often obfuscate, brush over, or use metaphors to speak about their experience with SMI, for instance revealing just enough to explain absences at work or mildly unusual behavior while the more stressful aspects of managing SMI are concealed.
While it is challenging to design system controls that holistically handle the nuance of this intentional vagueness, we can use our results to design access control mechanisms that are more dynamic and flexible than traditional role-based schemas [68]. Access controls typically consider what data types a user can or cannot see through mechanisms of disclosure. We suggest that systems should instead consider how transparently a user can see what data type over what duration, and when. In the visualization of data, representing these “transparency levels” can be accomplished via fuzzing or aggregation of personal data over a time period, similar to the techniques proposed by Epstein et al. [23]. We recommend allowing people to adjust the level of transparency with which their data is shown via lightweight controls or sliders.
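As a minimal sketch of what such transparency-level rendering might look like, assuming daily numeric mood ratings, the function below returns raw data at high transparency, weekly aggregates at mid levels, and a single coarse summary at low levels. The thresholds, bucket sizes, and the name `render_mood_series` are illustrative placeholders, not recommendations.

```python
import statistics


def render_mood_series(daily_values: list, transparency: float) -> list:
    """Render a shared daily mood series at a transparency level in [0, 1]."""
    if transparency >= 0.8:
        return daily_values                       # full-resolution view
    if transparency >= 0.4:
        weeks = [daily_values[i:i + 7]            # aggregate into 7-day buckets
                 for i in range(0, len(daily_values), 7)]
        return [round(statistics.mean(w), 1) for w in weeks]
    return [round(statistics.mean(daily_values), 1)]  # one coarse summary


two_weeks = [3, 4, 2, 5, 6, 4, 3, 2, 2, 1, 3, 4, 5, 4]
print(render_mood_series(two_weeks, 0.9))  # raw data
print(render_mood_series(two_weeks, 0.5))  # weekly means
print(render_mood_series(two_weeks, 0.1))  # single coarse value
```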
There are also opportunities to consider temporal factors related to data transparency. The level of fine-grained access that individuals allow others to have might change over time and warrant mechanisms for manual control. Or, systems might automatically decay the transparency of data access over time if no interaction occurs between two people. Social PI systems do not necessarily need to follow the disclosure-based model of social media platforms in which shared information can be persistently accessed by those it was shared with [10].
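One hypothetical mechanism for such decay is exponential: a granted transparency level halves after every fixed idle period, and any new interaction between the two people would reset the clock. The function name, the half-life parameter, and the reset behavior are all assumptions for illustration.

```python
from datetime import datetime


def decayed_transparency(granted: float, last_interaction: datetime,
                         now: datetime, half_life_days: float = 30.0) -> float:
    """Decay a granted transparency level while two people do not interact."""
    idle_days = max((now - last_interaction).days, 0)
    return granted * 0.5 ** (idle_days / half_life_days)


# 60 idle days at a 30-day half-life: 0.8 decays to 0.2.
print(round(decayed_transparency(
    granted=0.8,
    last_interaction=datetime(2020, 1, 1),
    now=datetime(2020, 3, 1)), 2))
```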
Designing for continuous data access
Designers should consider how to provide contextual privacy controls and mechanisms for the negotiation of continuous data access. We recommend designing across discrete instances of sharing qualitative information through to continuous sharing of real-time streaming sensor data. For example, providing an indication of the “views” that various people have over data types and time intervals is essential in the continuous sharing case, as is allowing people to adjust shared data in a post-hoc manner.
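A sketch of how such a standing “view” might be represented so that the system can surface who currently sees what; the field names and the `who_sees` helper are hypothetical, and the transparency field assumes a rendering function like the one sketched earlier.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ViewGrant:
    """One audience member's standing 'view' over a shared data stream."""
    viewer_id: str
    data_type: str          # e.g., "sleep", "mood", "social_activity"
    window_days: int        # how far back the viewer can look
    transparency: float     # rendering level, as in render_mood_series above
    revocable: bool = True  # owner may adjust or revoke post hoc


def who_sees(grants: list, data_type: str) -> list:
    # Supports a persistent indicator of who currently "views" a data type.
    return [g.viewer_id for g in grants if g.data_type == data_type]


grants = [ViewGrant("therapist", "mood", 90, 1.0),
          ViewGrant("sibling", "mood", 30, 0.5),
          ViewGrant("roommate", "sleep", 7, 0.8)]
print(who_sees(grants, "mood"))  # ['therapist', 'sibling']
```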
There are also opportunities to support moments of negotiation within a continuous sensing paradigm. These moments might occur when, for example, an individual with BD feels like their behaviors, symptoms, or needs are visible but being dismissed. The system might support ways for them to call attention to moments in the data or significant events. Similarly, support system members might notice patterns that motivate them to reach out to check in or get more information.
Designing for continuous data access at a specified level of transparency is important given our participants’ discussion of how continuous visibility is an important tool for condition management. Continuous data access also might reduce the burden on the individual with SMI related to the work of managing discrete instances of disclosure and the general pressure to keep people in their lives up-to-date.
Design appropriate defaults
Due to the interdependence of the factors that impact transparency experiences, giving users the agency to configure the transparency of data types for different audiences is essential. However, requiring people to configure all of these settings might be burdensome. We see transparency factors as potential guides for constructing appropriate defaults. One way to identify appropriate defaults is to assess a person’s openness about their mental illness during onboarding; this assessment can later be used to suggest sharing configurations as other users are added or connected to the system. Learning what stage of a mental health management plan an individual is currently in might also inform recommendations during system use. For example, an individual who has just been diagnosed might want to keep their therapist or doctor up to date on how medications are working, or to have more frequent check-ins with family members.
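A minimal sketch of openness-informed defaults, assuming a simple 0–10 onboarding openness score and audience role labels of our own invention (and reusing the Transparency tiers sketched earlier); every threshold here is a placeholder a deployed system would tune with its users:

```python
def suggest_default(openness: int, role: str) -> Transparency:
    """Suggest a starting transparency level for a newly added audience member."""
    if role in ("therapist", "psychiatrist"):
        return Transparency.RAW  # e.g., newly diagnosed users keeping clinicians close
    if role == "family":
        return Transparency.AGGREGATE if openness >= 5 else Transparency.TREND
    if role == "friend":
        return Transparency.TREND if openness >= 3 else Transparency.HIDDEN
    return Transparency.HIDDEN   # unknown audiences start fully closed
```

During onboarding, suggest_default would merely pre-populate the settings screen; the individual retains the final say over every audience member.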
Support collective data practices
Co-tracking scenarios (i.e., multiple people tracking one person, or multiple people tracking themselves and comparing with each other) warrant further investigation in the context of BD. Our participants described how helpful outside feedback can be for individual sensemaking. Allowing select support network members to co-track might also lend additional certainty to stochastic models of a person’s condition.
Our analysis also suggests the importance of providing education and empathy-building around BD for support system members, so that they become more capable audience members. Systems could structure educational moments for support system members, as well as encourage them to ask questions or check in. Systems should also allow support system members to communicate with each other, as this sort of visibility amplification and sharing of information can sometimes be an expected and essential part of care management.
Given the benefits of social support networks, invisibility can have negative consequences in the context of managing chronic illness. Designers should be aware that not all people managing chronic illness have robust social support systems. While it is common to think about designing for strong and weak social ties, these ties are not always available or appropriate, and not all support system members are equipped or suited to serve as collaborators. One way for systems to scaffold support where it is lacking is to connect PI systems with existing online forums [19], such as the subreddit r/bipolar (mentioned by several participants). This sort of openness must be managed with precautions that keep sensitive data from being unintentionally disclosed on general-purpose online forums or social networking sites.
Include mechanisms for negotiation of data access
We recommend designing mechanisms that support collaborative and/or coordinated negotiation of data access. Audience members should not be considered passive recipients of data, but rather active participants in modulating transparency. System mechanisms for sharing or adjusting data should mirror the practice of “check-ins” discussed by our participants and allow people to negotiate the degree of access granted. For example, a support system member might request access to the past month of a person’s health-related data, initiating a system-supported negotiation around that access. The individual with BD might then review and adjust the data before sending it: filling in missing pieces, obscuring sensitive or inaccurate data points, or adjusting the level of fuzzing or aggregation with which it is shared.
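The sketch below shows one way such a negotiation could be represented, with the owner's review step in the middle; the states and the counter-offer move are illustrative assumptions on our part, not a protocol our participants specified:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum, auto

class RequestState(Enum):
    PENDING = auto()   # support member has asked; owner has not yet responded
    ADJUSTED = auto()  # owner granted the window at a coarser granularity
    GRANTED = auto()
    DECLINED = auto()

@dataclass
class AccessRequest:
    requester: str
    data_type: str
    start: date
    end: date
    level: Transparency  # requested granularity, reusing the enum sketched above
    state: RequestState = RequestState.PENDING

    def counter(self, coarser: Transparency) -> None:
        """Owner's move: keep the requested window, but reduce its resolution."""
        self.level = coarser
        self.state = RequestState.ADJUSTED
```

A request for a month of raw mood data might thus be answered with weekly trends, mirroring in the system the partial revelations our participants described making in person.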
CONCLUSION
While privacy research in HCI often centers conceptually on disclosure, this framework does not always map cleanly onto on-the-ground privacy experiences. Using the lens of transparency, we provided a descriptive case study of the privacy practices of people managing BD, interpreted as transparency experiences and transparency factors. Our results contribute to a growing body of literature calling for more contextually grounded case studies of privacy practices [7, 22, 54]. Furthermore, our interpretive framework of transparency, combined with the unique experiences of people managing BD, helps foreground hidden contingencies in conceptions of privacy-as-disclosure. Our hope is that social PI system designers consider the strengths and weaknesses of a given interpretive framework of privacy, and choose their frameworks for understanding privacy carefully, considerately, and intentionally.
CCS Concepts
• Human-centered computing → Collaborative and social computing theory, concepts and paradigms; • Security and privacy → Social aspects of security and privacy
ACKNOWLEDGMENTS
We would like to thank our participant collaborators for their enthusiastic engagement. We also thank Caitie Lustig, Beck Tench, Florian Schaub, and Yixin Zou for their input throughout this project. The University of Washington’s Royalty Research Fund provided support for this work (65-6521).
REFERENCES
- [1]. Acquisti Alessandro, Adjerid Idris, Balebako Rebecca, Brandimarte Laura, Cranor Lorrie Faith, Komanduri Saranga, Leon Pedro Giovanni, Sadeh Norman, Schaub Florian, Sleeper Manya, Wang Yang, and Wilson Shomir. 2017. Nudges for Privacy and Security: Understanding and Assisting Users’ Choices Online. Comput. Surveys 50, 3, Article 44 (August 2017), 41 pages. DOI: 10.1145/3054926
- [2]. Altman Irwin. 1975. The Environment and Social Behavior: Privacy, Personal Space, Territory, and Crowding. Brooks/Cole, Monterey, CA, USA.
- [3]. Andalibi Nazanin and Forte Andrea. 2018a. Announcing Pregnancy Loss on Facebook: A Decision-Making Framework for Stigmatized Disclosures on Identified Social Network Sites. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 158, 14 pages. DOI: 10.1145/3173574.3173732
- [4]. Andalibi Nazanin and Forte Andrea. 2018b. Responding to Sensitive Disclosures on Social Media: A Decision-Making Framework. ACM Trans. Comput.-Hum. Interact. 25, 6, Article 31 (December 2018), 29 pages. DOI: 10.1145/3241044
- [5]. Andalibi Nazanin, Morris Margaret E., and Forte Andrea. 2018. Testing Waters, Sending Clues: Indirect Disclosures of Socially Stigmatized Experiences on Social Media. Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 19 (November 2018), 23 pages. DOI: 10.1145/3274288
- [6]. Aronson Jodi. 1995. A pragmatic view of thematic analysis. The Qualitative Report 2, 1 (1995), 1–3.
- [7]. Barkhuus Louise. 2012. The mismeasurement of privacy: Using contextual integrity to reconsider privacy in HCI. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 367–376. DOI: 10.1145/2207676.2207727
- [8]. Beauchamp Toby. 2009. Artful concealment and strategic visibility: Transgender bodies and US state surveillance after 9/11. Surveillance & Society 6, 4 (2009), 356–366. DOI: 10.24908/ss.v6i4.3267
- [9]. Bowker Geoffrey C. and Star Susan Leigh. 2000. Sorting things out: Classification and its consequences. MIT Press, Cambridge, MA, USA.
- [10]. boyd danah. 2007. Why youth (heart) social network sites: The role of networked publics in teenage social life. MacArthur Foundation series on digital learning–Youth, identity, and digital media volume 119 (2007), 142. https://ssrn.com/abstract=1518924
- [11]. Brashers Dale E., Goldsmith Daena J., and Hsieh Elaine. 2006. Information Seeking and Avoiding in Health Contexts. Human Communication Research 28, 2 (2006), 258–271. DOI: 10.1111/j.1468-2958.2002.tb00807.x
- [12]. Browne Simone. 2015. Dark matters: On the surveillance of blackness. Duke University Press, Durham, NC, USA.
- [13]. Cheney-Lippold John. 2018. We are data: Algorithms and the making of our digital selves. NYU Press, New York, NY, USA.
- [14]. Chung Chia-Fang, Agapie Elena, Schroeder Jessica, Mishra Sonali, Fogarty James, and Munson Sean A. 2017. When Personal Tracking Becomes Social: Examining the Use of Instagram for Healthy Eating. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17). ACM, New York, NY, USA, 1674–1687. DOI: 10.1145/3025453.3025747
- [15]. Chung Chia-Fang, Dew Kristin, Cole Allison, Zia Jasmine, Fogarty James, Kientz Julie A., and Munson Sean A. 2016. Boundary Negotiating Artifacts in Personal Informatics: Patient-Provider Collaboration with Patient-Generated Data. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (CSCW ’16). ACM, New York, NY, USA, 770–786. DOI: 10.1145/2818048.2819926
- [16]. Cohen Julie E. 2012. What privacy is for. Harvard Law Review 126 (2012), 1904.
- [17]. Consolvo Sunny, Roessler Peter, and Shelton Brett E. 2004. The CareNet display: lessons learned from an in home evaluation of an ambient display. In Proceedings of the 6th International Conference on Ubiquitous Computing (UbiComp ’04). Springer, Berlin, 1–17. DOI: 10.1007/978-3-540-30119-6_1
- [18]. Cranor Lorrie Faith and Garfinkel Simson. 2005. Security and usability: Designing secure systems that people can use. O’Reilly, Sebastopol, CA, USA.
- [19]. De Choudhury Munmun and De Sushovan. 2014. Mental Health Discourse on reddit: Self-Disclosure, Social Support, and Anonymity. In Proceedings of the 8th International AAAI Conference on Weblogs and Social Media (ICWSM 2014). AAAI Press, Palo Alto, CA, USA, 71–80. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8075
- [20]. Denicoff Kirk D., Leverich Gabriele S., Nolen William A., Rush A. John, McElroy Susan L., Keck Paul E., Suppes Trisha, Altshuler Lori L., Kupka Ralph, Frye Mark A., Hatef J., Brotman M. A., and Post Robert M. 2000. Validation of the prospective NIMH-Life-Chart Method (NIMH-LCM-p) for longitudinal assessment of bipolar disorder. Psychological Medicine 30, 6 (2000), 1391–1397. DOI: 10.1017/s0033291799002810
- [21]. Dinev Tamara and Hart Paul. 2006. An Extended Privacy Calculus Model for E-Commerce Transactions. Information Systems Research 17, 1 (2006), 61–80. DOI: 10.1287/isre.1060.0080
- [22]. Dourish Paul and Anderson Ken. 2006. Collective Information Practice: Exploring Privacy and Security as Social and Cultural Phenomena. Human–Computer Interaction 21, 3 (2006), 319–342. DOI: 10.1207/s15327051hci2103_2
- [23]. Epstein Daniel A., Borning Alan, and Fogarty James. 2013. Fine-grained Sharing of Sensed Physical Activity: A Value Sensitive Approach. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13). ACM, New York, NY, USA, 489–498. DOI: 10.1145/2493432.2493433
- [24]. Epstein Daniel A., Cordeiro Felicia, Bales Elizabeth, Fogarty James, and Munson Sean. 2014. Taming Data Complexity in Lifelogs: Exploring Visual Cuts of Personal Informatics Data. In Proceedings of the 2014 Conference on Designing Interactive Systems (DIS ’14). ACM, New York, NY, USA, 667–676. DOI: 10.1145/2598510.2598558
- [25]. Epstein Daniel A., Jacobson Bradley H., Bales Elizabeth, McDonald David W., and Munson Sean A. 2015a. From “Nobody Cares” to “Way to Go!”: A Design Framework for Social Sharing in Personal Informatics. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW ’15). ACM, New York, NY, USA, 1622–1636. DOI: 10.1145/2675133.2675135
- [26]. Epstein Daniel A., Ping An, Fogarty James, and Munson Sean A. 2015b. A Lived Informatics Model of Personal Informatics. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’15). ACM, New York, NY, USA, 731–742. DOI: 10.1145/2750858.2804250
- [27]. Froehlich Jon, Chen Mike Y., Consolvo Sunny, Harrison Beverly, and Landay James A. 2007. MyExperience: A System for in Situ Tracing and Capturing of User Feedback on Mobile Phones. In Proceedings of the 5th International Conference on Mobile Systems, Applications and Services (MobiSys ’07). ACM, New York, NY, USA, 57–70. DOI: 10.1145/1247660.1247670
- [28]. Gumperz John J. 1982. Discourse strategies. Vol. 1. Cambridge University Press, Cambridge, UK.
- [29]. Hayfield Nikki, Clarke Victoria, and Halliwell Emma. 2014. Bisexual women’s understandings of social marginalisation: ‘The heterosexuals don’t understand us but nor do the lesbians’. Feminism & Psychology 24, 3 (2014), 352–372. DOI: 10.1177/0959353514539651
- [30]. Johnstone Barbara. 2008. Discourse Analysis (3rd ed.). Blackwell, Oxford, UK.
- [31]. Joinson Adam N. and Paine Carina B. 2009. Self-disclosure, privacy and the Internet. The Oxford Handbook of Internet Psychology (2009), 237–252. DOI: 10.1093/oxfordhb/9780199561803.013.0016
- [32]. Kahn David A., Sachs Gary S., Printz David J., Carpenter Daniel, Docherty John P., and Ross Ruth. 2000. Medication treatment of bipolar disorder 2000: A summary of the expert consensus guidelines. Journal of Psychiatric Practice 6, 4 (2000), 197–211. DOI: 10.1097/00131746-200007000-00004
- [33]. Kang Ruogu, Brown Stephanie, and Kiesler Sara. 2013. Why Do People Seek Anonymity on the Internet?: Informing Policy and Design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 2657–2666. DOI: 10.1145/2470654.2481368
- [34]. Kersten-van Dijk Elisabeth and Ijsselsteijn Wijnand A. 2016. Design Beyond the Numbers: Sharing, Comparing, Storytelling and the Need for a Quantified Us. Interaction Design and Architecture(s) 29 (2016), 121–135.
- [35]. Khovanskaya Vera, Baumer Eric P. S., Cosley Dan, Voida Stephen, and Gay Geri. 2013. “Everybody Knows What You’re Doing”: A Critical Design Approach to Personal Informatics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13). ACM, New York, NY, USA, 3403–3412. DOI: 10.1145/2470654.2466467
- [36]. Kordzadeh Nima, Warren John, and Seifi Ali. 2016. Antecedents of privacy calculus components in virtual health communities. International Journal of Information Management 36, 5 (2016), 724–734. DOI: 10.1016/j.ijinfomgt.2016.04.015
- [37]. Laufer Robert S. and Wolfe Maxine. 1977. Privacy as a Concept and a Social Issue: A Multidimensional Developmental Theory. Journal of Social Issues 33, 3 (1977), 22–42. DOI: 10.1111/j.1540-4560.1977.tb01880.x
- [38]. Li Ian, Dey Anind, and Forlizzi Jodi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 557–566. DOI: 10.1145/1753326.1753409
- [39]. Lugg Catherine A. 2016. US Public Schools and the Politics of Queer Erasure. Palgrave Macmillan, New York, NY, USA.
- [40]. Madden Mary, Gilman Michele, Levy Karen, and Marwick Alice. 2017. Privacy, poverty, and big data: A matrix of vulnerabilities for poor Americans. Washington University Law Review 95 (2017), 53. https://ssrn.com/abstract=2930247
- [41]. Madejski Michelle, Johnson Maritza, and Bellovin Steven M. 2012. A study of privacy settings errors in an online social network. In 2012 IEEE International Conference on Pervasive Computing and Communications Workshops. IEEE, Piscataway, NJ, USA, 340–345. DOI: 10.1109/PerComW.2012.6197507
- [42]. Mankoff Jennifer, Fussell Susan R., Dillahunt Tawanna, Glaves Rachel, Grevet Catherine, Johnson Michael, Matthews Deanna, Matthews H. Scott, McGuire Robert, Thompson Robert, and others. 2010. StepGreen.org: Increasing energy saving behaviors via social networks. In Fourth International AAAI Conference on Weblogs and Social Media. AAAI Press, Palo Alto, CA, USA, 106–113. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM10/paper/view/1474
- [43]. Martin Emily. 2009. Bipolar expeditions: Mania and depression in American culture. Princeton University Press, Princeton, NJ, USA.
- [44]. Matthews Mark, Murnane Elizabeth L., and Snyder Jaime. 2017a. Quantifying the Changeable Self: The Role of Self-Tracking in Coming to Terms With and Managing Bipolar Disorder. Human–Computer Interaction 32, 5–6 (2017), 413–446. DOI: 10.1080/07370024.2017.1294983
- [45]. Matthews Mark, Murnane Elizabeth L., Snyder Jaime, Guha Shion, Chang Pamara, Doherty Gavin, and Gay Geri. 2017b. The double-edged sword: A mixed methods study of the interplay between bipolar disorder and technology use. Computers in Human Behavior 75 (2017), 288–300. DOI: 10.1016/j.chb.2017.05.009
- [46]. Munson Sean A., Cavusoglu Hasan, Frisch Larry, and Fels Sidney. 2013. Sociotechnical Challenges and Progress in Using Social Media for Health. Journal of Medical Internet Research 15, 10, Article e226 (22 October 2013), 14 pages. DOI: 10.2196/jmir.2792
- [47]. Murnane Elizabeth L., Cosley Dan, Chang Pamara, Guha Shion, Frank Ellen, Gay Geri, and Matthews Mark. 2016. Self-monitoring practices, attitudes, and needs of individuals with bipolar disorder: implications for the design of technologies to manage mental health. Journal of the American Medical Informatics Association 23, 3 (2016), 477–484. DOI: 10.1093/jamia/ocv165
- [48]. Murnane Elizabeth L., Walker Tara G., Tench Beck, Voida Stephen, and Snyder Jaime. 2018. Personal informatics in interpersonal contexts: Towards the design of technology that supports the social ecologies of long-term mental health management. Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 127 (2018). DOI: 10.1145/3274396
- [49]. Newman Mark W., Lauterbach Debra, Munson Sean A., Resnick Paul, and Morris Margaret E. 2011. It’s not that I don’t have problems, I’m just not putting them on Facebook: Challenges and opportunities in using online social networks for health. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW ’11). ACM, New York, NY, USA, 341–350. DOI: 10.1145/1958824.1958876
- [50]. Nissenbaum Helen. 2004. Privacy as contextual integrity. Washington Law Review 79 (2004), 119.
- [51]. Nissenbaum Helen. 2009. Privacy in Context: Technology, Policy, and the Integrity of Social Life. Stanford University Press, Palo Alto, CA, USA.
- [52]. O’Leary Kathleen, Bhattacharya Arpita, Munson Sean A., Wobbrock Jacob O., and Pratt Wanda. 2017. Design Opportunities for Mental Health Peer Support Technologies. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW ’17). ACM, New York, NY, USA, 1470–1484. DOI: 10.1145/2998181.2998349
- [53]. O’Leary Kathleen, Schueller Stephen M., Wobbrock Jacob O., and Pratt Wanda. 2018. “Suddenly, We Got to Become Therapists for Each Other”: Designing Peer Support Chats for Mental Health. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 331, 14 pages. DOI: 10.1145/3173574.3173905
- [54]. Palen Leysia and Dourish Paul. 2003. Unpacking privacy for a networked world. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’03). ACM, New York, NY, USA, 129–136. DOI: 10.1145/642611.642635
- [55]. Patel Dilisha, Blandford Ann, Warner Mark, Shawe Jill, and Stephenson Judith. 2019. “I Feel Like Only Half a Man”: Online Forums As a Resource for Finding a “New Normal” for Men Experiencing Fertility Issues. Proc. ACM Hum.-Comput. Interact. 3, CSCW, Article 82 (November 2019), 20 pages. DOI: 10.1145/3359184
- [56]. Peddinti Sai Teja, Ross Keith W., and Cappos Justin. 2014. “On the Internet, Nobody Knows You’re a Dog”: A Twitter Case Study of Anonymity in Social Networks. In Proceedings of the Second ACM Conference on Online Social Networks (COSN ’14). ACM, New York, NY, USA, 83–94. DOI: 10.1145/2660460.2660467
- [57]. Pierce James. 2019. Smart Home Security Cameras and Shifting Lines of Creepiness: A Design-Led Inquiry. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Article 45, 14 pages. DOI: 10.1145/3290605.3300275
- [58]. Pina Laura R., Sien Sang-Wha, Ward Teresa, Yip Jason C., Munson Sean A., Fogarty James, and Kientz Julie A. 2017. From personal informatics to family informatics: Understanding family practices around health monitoring. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM, New York, NY, USA, 2300–2315. DOI: 10.1145/2998181.2998362
- [59]. Price-Robertson Rhys, Obradovic Angela, and Morgan Brad. 2017. Relational recovery: beyond individualism in the recovery approach. Advances in Mental Health 15, 2 (2017), 108–120. DOI: 10.1080/18387357.2016.1243014
- [60]. Rapp Amon and Cena Federica. 2016. Personal informatics for everyday life: How users without prior self-tracking experience engage with personal data. International Journal of Human-Computer Studies 94 (October 2016), 1–17. DOI: 10.1016/j.ijhcs.2016.05.006
- [61]. Rodríguez Juana María. 2016. Queer Politics, Bisexual Erasure: Sexuality at the Nexus of Race, Gender, and Statistics. lambda nordica 21, 1–2 (2016), 169–182. https://escholarship.org/uc/item/8hv987pn
- [62]. Rooksby John, Rost Mattias, Morrison Alistair, and Chalmers Matthew. 2014. Personal tracking as lived informatics. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’14). ACM, New York, NY, USA, 1163–1172. DOI: 10.1145/2556288.2557039
- [63]. Sandhu Ravi S., Coyne Edward J., Feinstein Hal L., and Youman Charles E. 1996. Role-Based Access Control Models. Computer 29, 2 (February 1996), 38–47. DOI: 10.1109/2.485845
- [64]. Sannon Shruti, Murnane Elizabeth L., Bazarova Natalya N., and Gay Geri. 2019. “I was really, really nervous posting it”: Communicating about Invisible Chronic Illnesses across Social Media Platforms. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, Article 353. DOI: 10.1145/3290605.3300583
- [65]. Schaub Florian, Balebako Rebecca, Durity Adam L., and Cranor Lorrie Faith. 2015a. A Design Space for Effective Privacy Notices. In Eleventh Symposium On Usable Privacy and Security (SOUPS 2015). USENIX Association, Ottawa, 1–17. https://www.usenix.org/conference/soups2015/proceedings/presentation/schaub
- [66]. Schaub Florian, Könings Bastian, and Weber Michael. 2015b. Context-Adaptive Privacy: Leveraging Context Awareness to Support Privacy Decision Making. IEEE Pervasive Computing 14, 1 (January 2015), 34–43. DOI: 10.1109/MPRV.2015.5
- [67]. Scot Jamie. 2014. A Revisionist History: How Archives are Used to Reverse the Erasure of Queer People in Contemporary History. QED: A Journal in GLBTQ Worldmaking 1, 2 (2014), 205–209. DOI: 10.14321/qed.1.2.0205
- [68]. Shen HongHai and Dewan Prasun. 1992. Access Control for Collaborative Environments. In Proceedings of the 1992 ACM Conference on Computer-supported Cooperative Work (CSCW ’92). ACM, New York, NY, USA, 51–58. DOI: 10.1145/143457.143461
- [69]. Snyder Jaime, Murnane Elizabeth L., Lustig Caitie, and Voida Stephen. 2019. Visually Encoding the Lived Experience of Bipolar Disorder. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, Article 133. DOI: 10.1145/3290605.3300363
- [70]. Suppes Trisha, Leverich Gabriele S., Keck Paul E. Jr., Nolen Willem A., Denicoff Kirk D., Altshuler Lori L., McElroy Susan L., Rush A. John, Kupka Ralph, Frye Mark A., Bickel Maia, and Post Robert M. 2001. The Stanley Foundation Bipolar Treatment Outcome Network: II. demographics and illness characteristics of the first 261 patients. Journal of Affective Disorders 67, 1–3 (2001), 45–59. DOI: 10.1016/S0165-0327(01)00432-3
- [71]. Wang Yang, Leon Pedro Giovanni, Acquisti Alessandro, Cranor Lorrie Faith, Forget Alain, and Sadeh Norman. 2014. A Field Trial of Privacy Nudges for Facebook. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI ’14). ACM, New York, NY, USA, 2367–2376. DOI: 10.1145/2556288.2557413
- [72]. Wang Yang, Leon Pedro Giovanni, Scott Kevin, Chen Xiaoxuan, Acquisti Alessandro, and Cranor Lorrie Faith. 2013. Privacy Nudges for Social Media: An Exploratory Facebook Study. In Proceedings of the 22nd International Conference on World Wide Web (WWW ’13 Companion). ACM, New York, NY, USA, 763–770. DOI: 10.1145/2487788.2488038
- [73]. Warner Mark, Maestre Juan F., Gibbs Jo, Chung Chia-Fang, and Blandford Ann. 2019. Signal Appropriation of Explicit HIV Status Disclosure Fields in Sex-Social Apps Used by Gay and Bisexual Men. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM, New York, NY, USA, Article 692, 15 pages. DOI: 10.1145/3290605.3300922
- [74]. West Peter, Van Kleek Max, Giordano Richard, Weal Mark J., and Shadbolt Nigel. 2018. Common Barriers to the Use of Patient-Generated Data Across Clinical Settings. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 484, 13 pages. DOI: 10.1145/3173574.3174058
- [75]. Zhao Xuan, Lampe Cliff, and Ellison Nicole B. 2016. The social media ecology: User perceptions, strategies and challenges. In Proceedings of the 2016 SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 89–100. DOI: 10.1145/2858036.2858333