
1 Introduction

Two propositions result from an expanded systems view of cognition:

First proposition: the externalization of cognition involves the recruitment of both external technology AND external people/institutions into individual processes of “cognition.” The latter human/social externalities invite consideration of new strategies for “cognitive augmentation.” We need socio-technical augmentation for socio-technical cognition.

Second proposition: augmentation can be viewed as part of a spectrum of “cognition support” that ranges from leveraging processes for improvement and enhancement at one end to applying de-risking (e.g., noise reduction, trusted sources of data) at the other. In other words, “augmentation” can be achieved both through the enhancement of function and through the mitigation of the diminishment of function – both strategies yield additional cognitive resources.

Some strategies and tactics are effective all along that spectrum. To the extent that relationship holds, and if the first proposition above (i.e., that we need socio-technical augmentation for socio-technical cognition) is tractable, then future socio-technical forms of cognitive augmentation might be derived from existing strategies of socio-technical behavioral de-risking that have historically applied to the behaviors of people and institutions. Those de-risking strategies emerge in myriad business, legal, technical and social (BLTS) realms. Existing strategies for supporting reliable socio-technical systems can help both to augment and to de-risk socio-technical cognition.

1.1 Introduction of Analysis

The notions of cognition have always reflected a combination of individual and social elements. As the digital revolution has enhanced the ability to capture and measure ever-more-granular attributes of interactions, the scales have continued to tilt toward perceiving cognition as migrating from what was an intimate and individual process toward a more complex system of “socio-technical” processes. As our understanding of cognition changes, it invites modification in our strategies and operations aimed at augmenting cognition.

As the concept and locus of “cognition” continue to migrate from individual human brains to complex hybrid socio-technical systems, our strategies and tactics for augmenting such expanded “cognition” will also change. Future augmentations of cognition will be most effective if designed, developed and deployed with intentionality and awareness of the systemic nature and mechanisms of post-Internet networked cognition.

We suggest that “augmentation” and “de-risking” of systems share a common root of performance reliability at a given level of performance. From that starting point, we conclude that future strategies of cognitive augmentation can be gainfully informed by historical models of socio-technical behavioral de-risking. This conclusion is supported by an expanded view of cognition, which sees “situated cognition” as being possible both in external inanimate objects AND in other people and external institutional interactions. Applying this approach, this paper also seeks to anticipate some of the potential future vectors of augmentation of these expanded systems with reference to historical structures of de-risking the performance and behaviors of existing socio-technical systems [1].

1.2 Where Does Cognition Take Place?

Thought and cognition have traditionally been understood to reside in the brains of individual humans, and many long-standing and foundational rules, laws, and customs (such as culpability for actions under law) are based on that understanding. Increasingly, however, there is evidence that cognition and consciousness are more subtle, and more extended, than that conception allows. Cognition is amenable to “systems” analysis [2].

Cognition is increasingly recognized, and fruitfully analyzed, as taking place within and outside human brains in systems of scaled perception and meaning making. In fact, in the last several decades, the concept of “cognition” has migrated into the external world, where it is increasingly viewed as being “situated” and/or “embodied” in external inert physicality. From this perspective, the brain is a virtual “antenna” that is tuned by formal learning and informal experience into language and culture that foster the interactions from which the “mind” emerges.

What are the possible analytical and operational insights that might be derived if the processes of cognition are not just informed by physical environments (e.g., in perceptual consciousness), but also take place in those environments? As an initial matter, consider that even those who embrace the notion that cognition takes place in the brain recognize that those patterns of cognition are formed from external inputs. Education, indoctrination, training, etc. are examples of the formalization and institutionalization of that externality. There are also myriad informal external sources of cognitive patterning, for example learned social norms, touching a hot stove, etc.

Consider, for example, an adopted infant from one culture who is raised in a foreign culture – the grown-up adult’s language, cultural preferences, attitudes and cognitive patterns will be a product entirely of the adoptive land. In fact, it is relevant that from this perspective, education and social learning are forms of “augmentation” of cognition, although the augmentations precede the acts of cognition in preparing individual infant “minds” to apply patterns of meaning to later perceptions and cognition. Education is cognitive augmentation.

That point may initially seem rhetorical and academic, but it forms the basis for our later assertion that standard education and training in standard policies and rules is a form of “meaning security” that is necessary for the sustainable reliability of information networks. Since situated cognition of all sorts takes place on those networks, future strategies of “augmented cognition” will include heavy measures of education (and re-education) into rules and norms from which reliable socio-technical systems are built. There will be policy-based constraints on individual liberty associated with these standards, just as red lights and shoplifting rules constrain liberty at present. These forms of augmentations will have costs like those that arise in the social contract or the “golden rule” – both of which constrain liberties in exchange for enhanced de-risking and leverage.

2 Part 1 – The Wandering Concept of “Cognition”

2.1 Cognition in the Brain – Augmentation in the External Environment

Cognition is commonly thought to reside in the mind, which in turn resides in the brain. In the 1960s, notions of augmented cognition started from the presumption that cognition is a process that takes place in the human brain. In this early paradigm, augmented cognition is typically understood to refer to the augmentation of human (brain) cognition by external technologies.

For example, Douglas Engelbart defined augmented human intelligence in similar terms, as “increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems” [3]. From the beginning, the source of augmentation was considered an externality to human cognition. In fact, DARPA’s augmented intelligence program originally anticipated human-machine dyads [4].

It is notable, however, that these earlier conceptions were based on some significant assumptions. The Oxford English Dictionary defines “cognition” as: “The action or faculty of knowing, knowledge, consciousness; acquaintance with a subject” [5]. The OED defines “augment” as: “To make greater in size, number, amount, degree, etc. To increase, enlarge, extend” [6]. Notably, neither term is bound to a specific system. That contextual ambiguity is typically tolerable in casual conversation about augmented cognition; however, where the intention is to analyze and enhance specific systems of “augmented cognition” for future applications, disambiguation is a necessary first step.

2.2 Extended Minds – Social Cognition

In the 1960s, sociologists such as Erving Goffman explored the relationship of the “self” to interactions, from both expressive and perceptual angles [7]. The result was a recursive house of mirrors of meaning, with social and technical elements intermixing to inform frameworks of cognition. Kuhn observed other social components of individual cognitive function [8]. In both of these cases, the individual’s cognition and consciousness (sense of self) are observed to be rectified and valorized by interactions with other individuals (rather than inert objects) in social settings.

In both of these analyses, there is no suggestion that cognition resides outside of the mind of the individual, but rather that the individual’s cognitive patterns (existential and paradigmatic) are heavily influenced by people and institutions in the external environment. This has direct impact on the search for future strategies for augmenting cognition. We can best augment them if we first know what they are.

2.3 Extended Minds – Situated Cognition in Inert Physicality

Various theories of situated cognition and embodied cognition embrace the extension of the concept of cognition to external environmental elements. Unlike traditional notions of “augmentation,” these non-brain sources of cognitive support are not seen as mere external sources of “augmentation,” but rather as foundational elements of the act of cognition itself.

At some level, this is an issue of semantics; however, it is not a mere academic exercise because a more holistic view of the apparatus of cognition (brain, language, culture, rules, other brains, etc.) combined with consideration of the information processes that are advanced through cognition/intelligence (e.g., data plus meaning equals information) suggests increased solution phase space for nagging challenges of de-risking broadly distributed and scaled information networks. We cannot fully and effectively augment cognition if we misapprehend where and how it occurs.

2.4 Socio-Technical Solutions for Socio-Technical Problems

The heterogeneous nature of augmentation in socio-technical systems introduces new classes of variables into the analysis of the augmentation and performance integrity of cognitive systems. In fact, this article will observe that cognition results from hybrid socio-technical systems composed of humans in their respective environments and myriad inter-dependent elements from business, legal, technical and social (BLTS) domains.

By parsing the concepts, we can expose additional degrees of freedom in networked information/cognition systems, enabling more highly refined development of such systems, with downstream benefits of enhanced reliability and predictability of experience for information seekers and data subjects alike. In fact, the advances in reliability and predictability of integrity of information systems due to advances in “meaning integrity” will help dissipate many of the current concerns voiced under the general categories of “security,” “privacy,” and “liability mitigation” online. This is because those concerns are just symptoms of underdeveloped integrity in information channels, all of which can be improved with enhanced “meaning” systems. This relationship of augmentation and de-risking is further elaborated in the section on meaning integrity below.

2.5 What Insights Are Derived from These Alternative Conceptions of Cognition?

The concept of “cognition” has migrated from the brain to physical externality. Concepts such as “social cognition,” “embedded cognition,” “morphological cognition/computing,” “situated cognition,” etc. all share the characteristic of analyzing the systems (including systems of systems) that can be said to be involved in “cognition.”

The language and the concepts of these extended mind paradigms are fluid and frequently overlap due to the intrinsically ambiguous nature of the concepts of cognition and consciousness. While some part of these notions may initially seem like exercises in semantics, brief reflection reveals that the parsing of these concepts can also lead to additional potential phase space for cognitive augmentation solutions.

As an initial matter, consider that their shared recognition of the relevance (and even necessity) of the external environment in the processes of cognition invites consideration of what additional strategies of “augmentation” associated with such environments might be explored in these forms of extended minds and their extended systems of cognition. The question takes on added importance with the rise of socio-technical systems, where three types of entities – people, institutions and things – exert hybrid influences on cognition. What analytical construction could help tame the complexity of these myriad inputs?

3 Part 2 - The “Cognitive Unit” as Reaction Vessel for Converting Data into Information by Adding Meaning

3.1 What Is a “Cognitive Unit?”

In this paper we assert that information can be produced, and cognition can take place, in any system, at any scale, in which meaning is applied to data to yield information. We will refer to these virtual information reaction vessels as “cognitive units.” Following this definition, systems of “augmentation” of these cognitive units are those that can increase the information yield of a given set of data and/or perceptions introduced to those cognitive units.

That sounds very theoretical, but application of a simple algorithm can help to make it operational: data plus meaning equals information (consistent with Claude Shannon [9]). Therefore, any structure of meaning that can yield more information from a given set of data offers a potential indication of the presence of a cognitive unit. In turn, identification of the attributes of that “cognitive unit” helps to reveal paths to its cognitive augmentation.
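The “data plus meaning equals information” algorithm can be made concrete with a toy sketch. The following is our own hypothetical illustration (all names invented for this example, not taken from the paper): a “cognitive unit” is modeled as a function that applies a meaning structure to raw data tokens, and a richer meaning structure yields more information from the same data.

```python
# Hypothetical sketch: a "cognitive unit" as a reaction vessel that
# converts data into information by applying meaning. Names are invented
# for illustration only.

def cognitive_unit(data, meaning):
    """Apply a meaning map to raw data tokens; unmapped tokens yield nothing."""
    return [meaning[token] for token in data if token in meaning]

raw_data = ["RED", "GREEN", "RED", "BLUE"]

sparse_meaning = {"RED": "stop"}
rich_meaning = {"RED": "stop", "GREEN": "go", "BLUE": "yield"}

# The same data yields more information under the richer meaning structure,
# suggesting (in the paper's terms) the presence of a more capable cognitive
# unit and indicating a path to augmentation: augment the meaning, not the data.
assert len(cognitive_unit(raw_data, rich_meaning)) > len(cognitive_unit(raw_data, sparse_meaning))
```

On this sketch, “augmentation” corresponds to enriching the meaning map so that a fixed stream of data produces a greater information yield.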

3.2 Socio-Technical Systems that Are Cognitive Units Can Leverage Data to Enhance Information Yields

A car and a driver are together an example of a socio-technical system. Even a perfectly tuned car cannot de-risk a reckless (or intentionally criminal) driver. Where the driver and the car both perform in accordance with expectations (set by technical specification for the car and rules and norms for the driver), the socio-technical system is de-risked.

Networked information systems are also socio-technical systems. Even a perfectly engineered Internet cannot de-risk a reckless (or intentionally criminal) online user. Most cybersecurity work has been focused on engineering the technology and protecting the data as a way of de-risking the internet. That strategy is necessary but insufficient to achieve de-risking. In fact, the term “users” is misleading in the case of socio-technical systems such as the Internet, because people and institutions are not just external users of a tool, but themselves critical components of the sociotechnical information system of the Internet. Markets are another example. The internet (and markets) would be inert if there were no users (participants). They are both examples of socio-technical systems.

Just as millions of untrained and unaware drivers would cause roadways to be so dangerous as to be un-usable, so too are information networks negatively affected in the absence of rules. The car and driver should also be treated as a unit in certain levels of risk mitigation analysis.

3.2.1 How Does the Analytical Construction of a Socio-Technical System Aid in the Identification of a Cognitive Unit?

Narratives and paradigms are transferred between and among people and institutions in socio-technical structures. Examples are students learning in schools, people behaving in conformity with laws, employees following company policies, companies following supply chain contracts and industry standards, etc. By spreading “meaning” these organizations “de-risk” future interactions at cumulatively massive scales.

Humans and human institutions regularly rely on these networks to situate their cognition, with the result that the augmentations of such situated cognition already involve systems outside the human brain. Those mental systems outside the human brain invite consideration of augmentation pathways separate from those of more traditional “augmented” cognition.

What is new is the simultaneous mutual situated cognition of information reaction vessels in one another. These are the mechanisms that dynamically create and perpetuate meaning, much as the waves of synaptic activity (not the synapses themselves) are thought to give rise to traditional cognition in the human brain. In socio-technical cognitive systems, many of the components are not inert embodiments of technology, but rather interacting meaning reactors. Data flows through and among these reactors, feeding the meaning mechanisms. There are many meaning mechanisms around us: brains/minds, institutions, and markets. AI is an emerging meaning mechanism – hence its existential threat to the existing meaning-making systems of humans and institutions.

This article uses the concept of augmented cognition (frequently described as external technical boosts for activities in the human brain) as a starting point to suggest that a fruitful line of analysis can be supported by revisiting the notions of both augmentation and cognition with a more systems-oriented approach. This approach is supported by many lines of inquiry over thousands of years, but has been ignored under the paradigm of the individual as the thinking unit. The bottom line is that cognition might be said to reside in those systems that apply meaning to data, thereby creating “information.” These are the reaction vessels of information, and therefore of cognition.

To de-risk a socio-technical system, the hybrid behaviors of the social and technical elements must be made more reliable and predictable. Many of those same strategies might prove useful in augmenting the forms of cognition that take place in those socio-technical cognitive units.

Consider, for example, that in both cases, the quest for reliability and predictability can be seen to simultaneously de-risk and leverage (augment) the processes of those socio-technical systems. To the extent that “cognition” is viewed as taking place in these systems, that cognition can be said to be enhanced by these strategies.

Security from Reliability

Security is performance or operation in accordance with expectations. From this definition, security (and privacy and liability) can be pursued through the path of reliability and predictability. The reliability of socio-technical systems requires reliability of technology and reliability of people/institutions. Technology is made reliable by conformity to specifications. People/institutions are made reliable by conformity to rules and laws.

What is needed for people and institutional reliability? Not specs, but rules, incentives and penalties. This applies to any entity with discretion, whether human or organizational. Since their “reliability” is less predictable as a result of their potential exercise of discretion, penalties and incentives are needed to draw that discretion in a more reliable (and secure and private) direction.

4 The Missing Piece – Meaning Standards for Cognitive Integrity and SI System Integrity

4.1 Data Interoperability Versus Meaning Interoperability

If these mechanisms of supporting technical and data interactions are broadly present, why aren’t the resulting information systems more interoperable? Data flows across technical systems are largely interoperable, since data, in its purest form is relatively inert, and data and technical interoperability have the advantage of being based on the laws of physics which are, for all present purposes, uniform across the globe.

4.2 “Meaning Security” Has Yet to Be Explicitly Recognized and Developed

HIPAA, GLB, and GDPR apply “data” security to the challenge of information security. That is necessary, but insufficient, for information security. Data plus meaning equals information. So information security requires both data security AND meaning security; however, the term “meaning security” is unfamiliar. And what does it have to do with augmented cognition?

This paper asserts that the challenge of the current and future period is no longer data interoperability, but “meaning” interoperability. To accomplish “information security,” a system must apply strategies for both “data security” AND “meaning security.” The ultimate failing of HIPAA, GLB, GDPR and other Fair Information Practice Principles (FIPPs)-based “privacy” rules is that they emphasize “data” security, ignore “meaning security,” and treat all data as equally information-laden (i.e., equally “surprising” in Shannon terms). This relieves plaintiffs of proving harm. Since data is a dual-use technology, this is too blunt an instrument.
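The claim that data items are not equally “surprising” can be grounded in Shannon’s measure of self-information (surprisal). The sketch below is our own illustration (function and variable names are invented): it estimates token probabilities from a corpus and shows that a rare token carries strictly more information than a common one, contrary to the implicit assumption the paper attributes to FIPPs-based rules.

```python
# Illustration of Shannon self-information ("surprisal"): -log2 p(x).
# Rare data items are more "surprising" and so more information-laden.
import math
from collections import Counter

def surprisal_bits(token, corpus):
    """Estimate -log2 p(token) from empirical frequencies in a corpus."""
    counts = Counter(corpus)
    p = counts[token] / len(corpus)
    return -math.log2(p)

corpus = ["common"] * 7 + ["rare"]

# The rare token (p = 1/8) yields 3.0 bits; the common token (p = 7/8)
# yields only about 0.19 bits. Data is not equally information-laden.
assert surprisal_bits("rare", corpus) > surprisal_bits("common", corpus)
```

Under this view, a rule that treats every data element as equally sensitive ignores exactly the quantity – surprisal – that determines how much information (and hence potential harm) a disclosure carries.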

4.3 Secrecy Is Dead – Manage Risk (and Augment Cognition) with “Meaning” Reliability/Security Not Data Security

The foregoing should not be interpreted as advocacy for the death of security [10]. Instead, it is based on a realistic assessment of the realities of seeking to maintain security controls over exponentially increasing interaction systems (and the exponentially increasing risks that they create) [11]. From this perspective, data security is a losing battle [12]. It is still important to raise the costs of unauthorized access (and the consequent dissipation of information arbitrage), but the effort is ultimately doomed to fail. That failure is driven by the billions of people and their trillions of “thumb swipes” performed every day. Those swipes reflect information-seeking behavior. The collective pressure of that behavior will ultimately doom efforts to protect the data that feeds the insights being sought.

4.4 Augmented Cognition and de-Risking Are Linked Because They Both Depend on the Presence of Meaning to Convert Data into Useful Information

Data without meaning cannot inform a party. If the party is not informed, cognition cannot be said to have occurred (or to have been augmented!), and future risks cannot be avoided.

4.5 Pathways to Meaning Augmentation/Security

The meaning-making mechanisms of individuals are informed by education, narratives, priors, etc. The meaning-making mechanisms of institutions (business, government, civil society) are more specifically programmed – set forth in foundational documents and regulations such as articles, bylaws, contracts, constitutions, etc. Shared meaning across a population enhances the likelihood that the population will be similarly informed by a given set of data/perceptions. Similar information supports similar behavioral responses – for example, red means stop. The light does not stop traffic; it is the agreement that red signals the engagement of stopping behavior that stops traffic.
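The red-light mechanism can be sketched in a few lines. This is a minimal hypothetical illustration (names invented here): each agent’s behavior is its meaning map applied to a signal, so a population sharing one meaning map behaves uniformly, while an agent lacking the shared meaning does not respond to the signal at all.

```python
# Minimal sketch: shared meaning, not the signal itself, coordinates behavior.
# All names are hypothetical, invented for this illustration.

SHARED_MEANING = {"red": "stop", "green": "go"}

def respond(signal, meaning):
    """An agent's behavioral response is its meaning map applied to the signal."""
    return meaning.get(signal, "ignore")

# Three drivers who share the same meaning map produce one uniform behavior.
drivers = [SHARED_MEANING, SHARED_MEANING, SHARED_MEANING]
responses = {respond("red", m) for m in drivers}
assert responses == {"stop"}

# An agent without the shared meaning does not stop: the light alone
# does not stop traffic; the agreement about "red" does.
assert respond("red", {}) == "ignore"
```

The same structure applies to institutional meaning mechanisms: bylaws, contracts, and standards are, in effect, distributed copies of a shared meaning map that de-risk future interactions at scale.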

4.6 Augmentation and de-Risking Are Linked Because Both Preserve Cognitive Resources

The link is made clearer through recognition that security, privacy and liability mitigation are all symptoms of the underlying illness of a lack of “integrity” (as variously measured) of the input and output channels of information that form the myriad feedback loops (operating at multiple levels) from which cognition emerges. As cognition migrates from the brain to hybrid socio-technical systems, the connections of these channels become more extended and thereby more vulnerable to intrusions on integrity. New strategies that augment cognition will include those that can mitigate the threats and vulnerabilities of these extended channels, and thereby decrease the resources (including cognitive load) that are associated with maintaining the reliability of those channels. Those resources can then be re-directed toward more directly cognitive tasks. This is roughly akin to trying to write a novel in a quiet room versus a noisy room.

5 Conclusions

In this paper, we observe that prior notions of cognition and therefore its augmentation, should be revisited to account for the emergence of broadly scaled “situated cognition” in online information networks, and the increasing hybridization of socio-technical information networks. We also propose that a variety of new vectors are made available for augmentation as cognition migrates (or is outsourced?) to massively networked and distributed socio-technical systems, where we each are integrated together with our institutions as users, and also “augmenters” of an emerging “cognitive commons.”

6 Recommendations

The interaction trends in these disintermediated, distributed information networks have challenged traditional notions of cognition as a social phenomenon, as well as institutional meaning-making mechanisms. This undermines institutional power by moving traditional communications to channels that are not subject to normal regulatory oversight. We need to do a better job of rendering the humans in the system more reliable. This is a case for developing compensating controls that will meet those security objectives of the system that remain unmet by existing tools. These include the business and legal constraints (rules) that will de-risk systems from a cybersecurity perspective.

7 Future Work

It is the intent of the authors to pursue research into these alternative controls, to discover patterns of behavior and activity that will render the human aspects of systems more reliable, and to evolve standards for these practices for dissemination to others.