
1 Introduction

The use of brain-computer interfaces (BCIs) has seen considerable success in recent years [1,2,3,4,5,6,7,8]. It is mainly in the medical field that BCIs have proven able to solve many problems and help many patients lead a more decent life. As assistive technology, BCIs enable people to communicate (e.g. for ALS patients), to move (e.g. neuroprosthetics) and to direct external devices such as wheelchairs [9,10,11]. Questions of feasibility form the cornerstone of BCI research in this domain, and ethical questions have been addressed based both on the current technological status quo and on projections of what BCIs might look like in the future [12,13,14,15,16,17]. Beyond the medical context, BCIs are currently used for entertainment, where improving usability and the gaming experience forms the major line of research [18, 19]. The automotive sector is a further area where BCIs can serve important goals such as supervising and directing a driver’s performance, extending beyond the automobile to many more contexts where attention and vigilance are important [20,21,22,23,24,25,26]. In addition, the military has developed BCI technologies that serve, among other things, to enable “silent talk” between soldiers in the field [27]. Another area that has attracted increased attention is the industrial sector, where BCIs can support human workers by enabling them to control an external device (such as a robot arm or a vehicle) [28,29,30].

Given the rapid development of BCI technology and its impact on individual and social life, a reflective view is urgently needed. As with all forms of (technological) progress, it is reasonable to assume that the consequences of an increasing use of BCIs will have both positive and negative aspects. To design technology that makes human lives better, normative reflection on these aspects is crucial. Furthermore, technological progress is not predetermined and unchangeable. Rather, it follows and depends on human decisions and agency. Consequently, an ethical view on technology is both needed and feasible.

However, there are some problems with such a view. First, reflective views typically come too late, since new technologies emerge more rapidly than any normative assessment can proceed. Second, the normative world is characterized by pluralism. That is, no one has ever succeeded in finding the “right ethics”, not just for questions surrounding technology but for any aspect of human activity. Whether pluralism as an ethical view is ultimately correct is hotly debated [31, 32], and it might turn out that monism is true, i.e. the view that there exists some single true morality. But we are far from reaching it, and given the complexities of human thinking and judging, it seems reasonable to assume that we must live with pluralism indefinitely. Third, not only pluralism but also the complexity of modern societies and of technology itself makes it hard or even impossible to identify responsibility for the goods and bads that arise in or from the use of technology. This problem is intensified by autonomous technology that has the capability to “act” independently of human supervision and action. Artificial intelligence, which can and will be found in many everyday applications and which allows for driverless cars or workerless factories, for example, shows how hard it is to find those who are responsible for ethical failures, misuse or accidents. This is particularly the case when a problem scales up due to the interconnectedness of these technologies. In technology, as in economics, the systems are often too complex to reliably find knots of responsibility, i.e. agents to whom we can attribute knowledge, causal impact, intentions and so on, all of which are necessary to hold someone responsible. Consider driverless cars to see how the question of responsibility is at the core of current ethical and political debates [33].

The current practice in the field of technology ethics often proceeds by merely listing the ethical challenges in a broad way, without putting much effort into thinking about how exactly “the good” can be brought about. It also often lacks closeness to the phenomena. This means, for example, that the various contexts in which BCI technology is used are insufficiently taken into account when it comes to assigning responsibility. Moreover, technology that acts at least partly autonomously requires a different evaluation than technology that does not. When an approach is chosen that locates the implementation of values in the process of technology development itself, such as value-by-design, the broader implications of making ethics effective, such as pluralism or regulatory issues, are rarely addressed. Finally, scholars regularly try to find facts about what it means to be responsible, descriptive facts as well as normative ones, and to propose legal regulation to counteract problematic consequences and avoid unwanted implications of new technology.

But, as [34] has shown in another context, this approach faces problems. One of these is that the descriptive and normative criteria and facts about responsibility are not simply “there”, waiting to be found by a thorough philosophical analysis. Rather, these factors are flexible and open to being assigned after a social process of finding (temporary) agreement. In addition, the traditional approach often does not sufficiently address questions of (artificial) agency, and of who the agent is and in what way, when it comes to determining responsibility.

Still, finding the right addressees for making technological progress ethical is crucial. We cannot simply list all the ethical issues and either leave the solution up to others or avoid thinking about how ethical thinking can be made effective, because this would render ethical reasoning rather superfluous. Given the need for and feasibility of normatively assessing new technologies on the one hand, and the problems with traditional approaches in ethical and legal scholarship on the other, a new approach is proposed. To accomplish this, the paper pursues two guiding questions: (1) how should we understand the ethical questions surrounding the development and use of BCIs in a way that is both technologically adequate and close to the phenomena?; and (2) what means are there to push technology development in the right direction, i.e. the direction that reasonable ethical reflection reveals as pressing?

In what follows, I will first introduce a distinction aimed at helping ethical evaluations of technology get a grip on the phenomena they deal with (Sect. 2). The result is a better view of the phenomena at hand, enabling a more precise assessment based on a distinction between two forms of agency. I will briefly apply it to the ethics of BCIs.

Second, I will broaden the perspective and ask how these ethical considerations can be applied in the real world (Sect. 3). The main emphasis will lie on outlining a philosophical move from ethics to politics, that is, the proposal that ethical questions are better solved by having the right political institutions than by having the right ethical point of view.

2 Establishing an Ethics Matrix to Assess BCIs

2.1 Primary and Secondary Agency

According to almost all philosophical theories, ethics is about agency [35]. An ethical evaluation, proceeding under the fundamental distinction between good and bad, or right and wrong, applies crucially to acts of agents. Only if something can be classified as an action, committed by an agent, can it be reasonably scrutinized. Agency is crucial particularly for questions of responsibility, since we ascribe responsibility primarily to agents. This also holds true for larger questions as they are typically dealt with in political ethics, social ethics and the like, where it is particularly important, though often not directly picked out as a central issue, to know who the agent in question is. The same is true in the ethics of technology, where the question of agency is no less complex than in political or social ethics.

This is so because, first, much as in the latter domains, where we talk about political communities, the state and other agents, in the development and production of technology we often also find collective agents, such as corporations or the scientific community, to which we apply our ethical reasoning. However, the actions of these collectives depend on individual actions and their interrelations. Just as it is not the government per se that acts, but individual politicians, judges or public officials, it is a plurality of agents within technology development that are responsible for the results. Second, the products of technological progress themselves, i.e. technological devices, exhibit what we might reasonably call agency. What is meant by ascribing agency to technological artefacts is what currently drives, among other things, the development of artificial intelligence: the relative autonomy and automatization of technologically initiated or mediated processes. For example, if a car can drive without a human driver, it exhibits at least some degree of agency. Similarly, a BCI used as an assistive device can also exhibit agency to a certain extent. This is the case, as will be discussed below, when a brain input is used to automatically direct or set in motion other processes, e.g. in a car.

There are many accounts of agency, prominent in disciplines such as philosophy, psychology or cognitive science [36]. According to the received opinion (and the one that the proposed concept will be based upon), agency consists of intentional action as a special sort of behavior, i.e. behavior that is accompanied and/or caused by certain mental states: desires, beliefs and intentions. These are followed by the carrying out of the intention through a physical movement. This also includes non-movements, as in the case where one refrains from doing something based on a desire, a belief and an intention. Thus understood, an action is distinguished from mere reactive behavior, but it includes habitual and complex actions, where there is not a desire, belief and intention for each part of the action.

There is anything but agreement on many questions surrounding the notion of an action [37]. For example, it is not clear how actions can be individuated, how collective action is feasible, or how exactly basic and more complex actions hang together. Moreover, the mental parts of an action that are thought to cause it, or that are at least presupposed when identifying an action, are not unanimously understood. Whether talk of intentions, for example, refers to mental states or rather to reasons for action is debated. Finally, whether and how our everyday talk about agency and agents applies to artificial agents, and whether machines can properly be called agents beyond the technological feasibility of implementing agency-like capabilities, is also up for debate.

So, both the plurality of agents (and subsequently of the various types of activities that need to be scrutinized ethically) and the multiple forms of agency call for a systematic approach in order to get a full view of ethical issues and ways to solve them. The promise is that we get a clearer picture of what is at stake and how we need to act to address the ethical issues of technological progress, i.e. whom we need to address and which areas we need to look at in our attempt to develop “good technology”. To accomplish this, I propose an alternative view of agency. In it, the major distinction is not between agents and non-agents, or between human and artificial agents, but between primary and secondary agents. The criteria for distinguishing these two concepts are (1) being able to implement and control action, and (2) being able to take over steps of complex actions.

To illustrate the point, consider the following examples:

(Production) Ed works at RainDrops, a company that produces drip moldings. He assembles moldings of diverse length and robustness into the final products. Because of the physical shape of the raw material, the moldings, Ed’s work is very hard. Therefore, he uses a robotic arm to support him. This external device is directed through a BCI with which Ed starts a process whereby the robotic arm delivers moldings of a particular length and size to him. The robot’s activity is pre-programmed, so that Ed simply gives the signal to start the series of actions the robot has to accomplish.

(Automobile) Anthony drives his new car, which is equipped with a BCI device. It continually monitors Anthony’s mental states, particularly his affective states. The roads Anthony uses to get to his office are usually very crowded. This causes Anthony to be stressed; he feels annoyed by the other drivers. At one point, he wants to overtake another car and therefore speeds up. However, the BCI prevents Anthony from speeding up and overtaking the other car, because it is programmed to do so when it detects signs of stress in Anthony.
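To make the contrast between the two vignettes concrete, the following minimal sketch renders the two interaction patterns in Python. It is purely illustrative: the function and object names (read_intent_signal, stress_level, robot, car) and the stress threshold are hypothetical and do not refer to any real BCI or vehicle API.

STRESS_THRESHOLD = 0.7  # hypothetical cut-off for "signs of stress"
PREPROGRAMMED_SEQUENCE = ["fetch_molding", "cut_to_length", "deliver_to_station"]

def production_loop(bci, robot):
    # Production: the BCI merely relays Ed's start signal; the robot then
    # executes a fixed, pre-programmed series of steps.
    if bci.read_intent_signal() == "start":
        for step in PREPROGRAMMED_SEQUENCE:
            robot.execute(step)  # every step is covered by Ed's single authorization

def automobile_loop(bci, car, driver_command):
    # Automobile: the BCI monitors the driver's affective state and has the
    # last word on whether the intended maneuver is carried out.
    if driver_command == "overtake":
        if bci.stress_level() > STRESS_THRESHOLD:
            car.inhibit("overtake")  # the BCI vetoes the driver's intention
        else:
            car.allow("overtake")    # the driver's intention passes through

The structural difference between the two loops, namely who sets the frame within which the other operates, is what the following discussion turns on.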

In both Production and Automobile, we face what can be called a primary agent. Whereas in Production it is clearly Ed who is the agent, using the BCI-controlled robot as an assistive device, in Automobile we see that it is the car’s BCI that controls the situation. In at least one case, however, we also find a secondary agent. Consider Anthony, the driver. Anthony possesses the ability and the resources to drive his car, and he is free to do so in a way that pleases him, but only to a certain extent. His mental states are causally efficacious up to the point where he intends to do something potentially harmful. This reduction of intentional efficacy is what makes him a secondary agent, as compared to the BCI in his car.

Both the primary and the secondary agent can, but need not, exhibit what action theory holds as essential prerequisites for action, namely beliefs, desires and intentions. However, only the primary agent’s mental states or programs are fully efficacious, which means that only Ed and the BCI in Automobile can force courses of action upon others, i.e. fulfill criterion (1) above. So, the reach of a primary agent’s mental states is such that they cover not only a specific action, i.e. the one that the agent sets out to do. Added to this is the ability to be authoritative in providing constraints, or even whole courses of self-chosen actions, for others. The BCI is, in this sense, authoritative in that it allows Anthony to act freely within a more or less clearly demarcated frame. It has “the last word” on what actions can or cannot be taken, even though it is preprogrammed to detect the limits of this frame (i.e. Anthony’s mental states that trigger the prohibition of overtaking, for example). This is the case even though the process upon which the BCI operates is automatized and does not itself imply freedom of action. Similarly, Ed provides the frame of action for the robot since he authorizes the robot’s movements to start or stop.

Moreover, a primary agent is characterized by the ability to independently take over the steps of a (complex) action, that is, it fulfills criterion (2) above. Production and Automobile both show that the primary agent, as compared to the secondary agent, acts independently from the authority or permission of others. It is true that the BCI remains inactive until it detects stress and Anthony attempts to overtake another car. Therefore, it is dependent on another event or agent. But this is not to say that its action rests on another agent’s authorization. It is caused by a trigger, but this is hardly comparable to when Ed authorizes the robot to start its movement. All human action is triggered by something else, at least most of the time, but we would not take this fact to say that humans are not agents who take over steps of a complex action. This, again, shows how, for primary agents, the reach of their mental states is larger than for secondary agents. The former have the capacity to act according to a variety of plans (as a variety of combinations of beliefs, desires and intentions), and they adapt their options to the environment as a reaction to what they perceive, whereas the latter are confined to a specific course of action that is dependent on the permission or inhibition of another (primary) agent.

These examples show that the distinction between primary and secondary agency is not identical with the one between a human (or full) agent and an artificial agent. It is not relevant whether the primary agent has freedom of action, or is self-aware and has consciousness, for example. What counts is its ability to be authoritative in forcing frames of action upon others and to initiate courses of action.

Furthermore, it is also compatible with other agents being responsible for the kind of (automated) activity of an artificial agent, even of an artificial primary agent. That is, we can still say that programmers or producers are responsible for a BCI’s activity, reliability etc., although the agent is a primary one. Being dependent on another event or agent (as in Automobile) does not exclude primary agency, nor does having been pre-programmed (or educated) by another party. This is because what exclusively counts for primary agency is the authority to force frames of possible actions on others. This authority can be caused and supervised by other agents, even pre-programmed, without primary agency ceasing. Consequently, we can assign responsibility to third parties without having to remove primary agency. The pressing ethical question, then, is: when is implementing or installing authority permissible or good? More on this below.

Note also that one and the same entity can be a primary agent in one context and a secondary agent in another, depending on what role it has. Consider the BCI in Automobile, where it is the primary agent. But imagine another scenario where, although it has the same capabilities, the BCI is used as a device to control the entertainment system: here the BCI is a secondary agent. Moreover, both sorts of agency can exist simultaneously.

What use is there in distinguishing primary and secondary agency? First, we can assign responsibility in a much more nuanced way than if we had merely one sort of agent. For instance, a self-driving car as a primary agent is certainly more of the primary-agent kind than, say, a car that is only partly automatized. Still, parts of the secondary agent can, in their specific context, be primary agents as well, and we have seen that primary agency still allows for third-party responsibility. Second, and more importantly, distinguishing primary and secondary agents enables us to group various ethical questions along the idea of agency and thus establish an ethical matrix. This in turn enables us to find the right person, institution or other entity to which we assign responsibility and where action needs to be taken to account for the ethical questions we pose. To illustrate and explain this point, let us now outline an ethical matrix for the use of BCIs, grouped according to the idea of agency. Before doing that, however, we need to consider modes of acting as part of the matrix.

2.2 Modes of Acting

Human (and, for that matter, technological) actions are plural. We carry out a huge number of actions on a daily basis, and it is a philosophical question itself whether a single physical movement represents one action or rather a number of different actions [38]. Still, we can simplify the issue and hold that, broadly speaking, actions fall into one of two categories. The first is the category of initiate, allow, inhibit. Under this rubric we find actions where the agent does something to start off an activity and bring about an effect, or to let something else happen. Here, the agent either refrains from interfering with a process that has already started, or he refrains from refraining, as when an agent stops another process. The second category is that of mediating between two processes. This takes place when an agent causes some process to go in a desired direction.

The former type of action involves an agent who has the authority to cause a course of action to begin or end. Moreover, the processes involved can, but need not, be correlated or hang together in some sense other than one action being the cause of another. Consider Automobile again: the BCI that allows or prevents Anthony’s overtaking of another car exemplifies an action of the initiate, allow, inhibit category, since the computer’s task is to not allow another action (the overtaking) to take place. The actions involved are not correlated other than that the BCI’s intervention causes another event (not) to take place.

Actions in the mediating category, on the contrary, relate two other actions to each other in a way that exploits, uses, or establishes a connection between them. Consider the following example:

(Research) Automobile giant BAW (Bajuwarische Autowerke) is currently working on autonomously and semi-autonomously driving cars. The trend towards automatization does not stop at the automobile industry. On the contrary: driving is one of the major fields of research and progress in the automatization process. However, there are many open questions regarding the design of these cars and other features that determine the popularity and acceptance of autonomous cars. To gain knowledge about these questions, BAW uses the whole range of marketing instruments, such as surveys and analyses. Another instrument is the use of BCIs to see how people react to different features of a car. Therefore, when subjects test a car, BAW connects them to a BCI, modifies the car’s features and measures the subjects’ reactions in real time. With the help of a BCI it is possible to automatize the research process and to gain fine-grained and ecologically valid data that help shape the final product.
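The following sketch illustrates the closed measurement loop described in Research. It is a hypothetical illustration only; the names (apply_feature, read_affective_response, the car and bci objects) are invented and do not correspond to any real research or BCI software.

import csv

def run_feature_study(bci, car, candidate_features, log_path="responses.csv"):
    # For each candidate design feature: apply it to the test car, read the
    # subject's real-time affective response via the BCI, and record the pair.
    # The BCI thereby mediates between the company's design options and the
    # (future) customers' preferences.
    with open(log_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["feature", "response_score"])
        for feature in candidate_features:
            car.apply_feature(feature)             # modify the car's features
            score = bci.read_affective_response()  # measure the reaction in real time
            writer.writerow([feature, score])      # collect fine-grained data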

What Research shows is how an agent – the BCI – can act to mediate between the interests and actions of various agents – the BAW company and (future) customers. It acts in a somewhat autonomous manner. But in its actions, it just measures what happens in the subjects’ brains, subsequently changes the car’s features and thereby collects data about people’s preferences regarding the car. The company’s and the people’s actions (producing and consuming) have an internal relation to each other: the production needs to match the preferences so that the company earns money and people can happily fulfill their desires.

The same can be said of examples where technology is the primary agent in accomplishing mediating tasks:

(Entertain) Babette starts her smart TV because she wants to see the latest episode of her favorite series “The Walking Dead”. The TV has a menu with several elements such as Photo Library, Web Browser, and apps from services such as Amazon and Netflix. A BCI helps her find the right item in the menu with a remote control. She moves the cursor towards the streaming app and clicks through the various items in the menu. Her brain reacts accordingly, and when positive signals are detected, the TV gets the information to continue in the menu. When negative signals are detected, the TV goes one step back in the menu hierarchy.
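A minimal sketch of the navigation loop described in Entertain is given below. Again, the names (detect_signal, the menu object and its methods) are hypothetical and stand in for whatever interface an actual smart TV and BCI would expose.

def navigate(bci, menu):
    # Walk the TV menu hierarchy: a detected positive signal confirms the
    # highlighted item and descends one level; a negative signal goes one
    # step back. The loop ends when a playable item (a leaf) is reached.
    while not menu.current_item_is_leaf():
        signal = bci.detect_signal()       # e.g. "positive" or "negative"
        if signal == "positive":
            menu.enter_highlighted_item()  # continue deeper into the menu
        elif signal == "negative":
            menu.go_back()                 # one step back in the hierarchy
    return menu.current_item()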

We might say that in Research and Entertain the BCI is used in an instrumental way, and that this use differs from its acting more autonomously in Production and Automobile. This is true, but only as far as it goes. For, first, in Research the BCI is also acting autonomously: it adapts to people’s reactions to the car and modifies the car’s features accordingly. Second, in Production and Automobile we might also say that the BCI is used as an instrument by human agents. For example, Anthony, the driver, uses the BCI not directly as an instrument, but indirectly, by buying and using a BCI-controlled car as such, rather than in each individual drive to the office, where it operates autonomously.

It is worth noting the advantages of the approach pursued here: if one speaks of automatization in a general way, and of technology as being instrumental for human interests in a similarly general way, one misses important distinctions between the various uses of technology. It makes a normatively relevant difference whether the BCI is used as in Research or as in Automobile, although both uses are instrumental and simultaneously exemplify autonomously acting technology. And different agents must be addressed to solve the ethical issues. So a finer distinction, one that differentiates uses of technology according to the criteria of primary/secondary agency and mode of agency, allows for a more fine-grained ethical analysis of the questions at stake. Moreover, this distinction might be helpful in identifying or ascribing responsibility for the ethically relevant implications and consequences.

2.3 Ethical Issues Related to BCIs

From what has been said so far, a matrix of ethical issues emerges. The two dimensions according to which the matrix is organized are the criteria “primary/secondary agency” and “mode of agency”:

When BCIs are used as primary agents, as in Automobile, the important questions relate to issues of autonomy, risk and privacy. Autonomy is addressed through a BCI’s capability of implementing and controlling actions. Since primary agents have, by definition, the last word on some decisions, one might question whether they have the right form of authority. In other words, the primary question is whether it is allowed, permissible or good to create the authority represented by the BCI. What are the proper rules guiding this question? And where can we look to find an answer? The ethical matrix not only allows us to answer these questions by looking at various proposals in different areas of applied ethics (business ethics, medical ethics), but also to find solutions in political philosophy, where the issue of authority has been extensively studied [39]. This stands in contrast to many debates in applied ethics and public policy that merely look at autonomy per se, without considering the processes that lead to accepting forms of authority (Table 1).

Table 1. Ethical matrix

Other areas of inquiry are not in fact specifically tied to a BCI, but they take on another status when seen from the perspective of authority, particularly authority represented by a technological device. If a BCI reduces my freedom to overtake other cars on my way to the office, I am indeed no longer the one who is in charge. But does this reduce my autonomy? It does not, of course, if I agreed to using it. It does, for example, when BCIs in cars are mandatory. This, however, is another question, namely one about the proper way to regulate risks by reducing freedom of action. Again, the question of political authority emerges, because the question is what the processes are by which we organize the regulatory business. New fields of inquiry can and must be addressed, such as business ethics or political philosophy. Again, traditional approaches, stemming mainly from medical ethics or bioethics, are not the sole locus of inquiry here.

Risk is in fact a very important issue, and in the case of Automobile, many dimensions of risk are involved. First, there is the risk that I might die in a car accident caused by my own reckless driving, something the BCI aims to prevent. Second, there is the risk that others might die from my reckless driving. In this case, we face the question of how to regulate social risks, apart from paternalistically preventing people from harming themselves as in the first case.

Next, consider privacy. Presumably a great deal of data will arise and can be stored when using the BCI. Of what kind are these data? What can one do with them – the car company, the BCI producer or the car insurance company, for instance? It is not only the device that gathers data; the driver’s behavior also generates data that can be used, for example if she turns off the BCI in order to overtake after all. Or should the possibility of a driver turning off the BCI be prohibited?

In other cases, such as Entertain, when BCIs are used as primary agents for an action of the mediating type, questions of functionality and safety emerge. We would certainly not want a BCI to explode while we navigate through our TV menu. The same holds for the BCI as a secondary agent, i.e. as an agent that strictly executes actions after another agent’s initiation. In Production, the BCI would not be widely used in factories if it did not do its job, namely to reliably assist the worker.

In Research, the major questions do not exactly lie in the use of the BCI device, but in the broader context of research ethics. The BCI can be used for various goals, such as market research or research examining design features of electric seats. It can be used by a totalitarian dictator trying to find out how to manipulate his subordinates, or by a highly reputed research organization in a working democracy. It can involve participants who gave or did not give their informed consent, and it can be part of a well-designed study or of a fake study carried out by malicious companies.

Now, given these questions and problems, how are we to find ethical solutions? Can the distinction between primary and secondary agents help in identifying such solutions? It certainly can, at least to a certain extent. As was said before, the proposed agency distinction can help hold people such as programmers and engineers responsible for the algorithms they program. Also, managers can be made responsible for carrying out this or that kind of research, and for developing this or that technological device. Finally, consumers have a choice as well, in the form of the reward they give to a company or producer through their consumption choices. Even if a BCI is the primary agent, there are always others who develop, produce and buy these agents. Unless these decisions are themselves made by an artificial agent, responsibility lies with these agents.

Crucially, however, with the primary/secondary-agent distinction, we are freed from looking at things from an individual value perspective and can move on to regard them from a procedural perspective. If agency is understood in these broad ways, we are brought to see the different contexts and types of activities that we need to evaluate ethically, in terms of both primary and secondary agency. Furthermore, we are brought to see that it is not only the individual questions of an agent using a BCI that are at the center of an ethical inquiry, but the ways we (as a community or society) regulate different approaches to different areas of use. What the distinction between primary and secondary agents thus aims at illuminating is the necessary shift from a merely individual ethics approach to questions of technology towards a more political and social approach, one that looks at the processes by which different areas of technology use are regulated, based on the questions that emerge given the agency distinction discussed above.

3 From Ethics to Politics

These ethical issues are hardly new, but a way to finally solve them is not in sight. Many books and articles have been written about the ethical, social, and legal aspects of emerging technology. A Google Scholar search returns over 34,000 entries for the query “ethical issues emerging technology” (between 2016 and 2017). Many base their assessment on future developments and more or less realistic scenarios [13]. And even though the previous pages outlined an alternative view of agency and the questions surrounding it in the case of BCI technology, this alone is of little or no help in finding those who are responsible when something goes wrong with the technology or, looking ahead, those who are responsible for getting things right. The reasons have been mentioned before: technology’s more rapid development compared to ethical analysis; pluralism; and the nature of responsibility. Among these, pluralism is the most challenging factor.

However, we might want to start thinking about whether finding such an agreement is in fact needed. Or at least, whether thinking ethically about these questions is adequate for finding solutions. It seems as if many people assume that doing ethics of technology as part of the academic world will eventually lead us to the “right answer”. This is why there are so many attempts to critically assess existing and new technology, and to examine all the impacts and consequences. But it might turn out that thinking about technology is not the right way to find the right answer. Perhaps technology must rather be used, experimented with, and tested. Is this not the proper use of technology, i.e. to actually use it and not (only) to think about it?

What is needed, then, is a way to allow experimentation with the development and use of technology such as BCIs. Since we cannot anticipate all answers, or even all problems, that might arise, it is crucial that there be ample room for experimentation and testing. Consequently, when it comes to deciding about this or that technology, the decider should not be the ethics community, nor the politicians who draft regulation and who are often seen as executors of ethics. Rather, the deciders should be the consumers, those on whose lives technology has an impact. In other words, technology must prove successful and acceptable through market mechanisms that are free from external regulation and from attempts to guide the development of technology in a direction preferred by politicians, the industry itself or any other powerful agent other than the consumers themselves.

This proposal amounts to shifting the debate about acceptable technology from ethics to politics. That might sound surprising, since markets are typically thought to be opposed to politics. However, this view overlooks a very important fact: markets depend on a great deal of political effort to uphold them, just as politics does. The picture of markets used here is broader than the one used when markets are set in opposition to politics. It rests on the view that all institutions that enable and facilitate cooperation in a society are based on a deeper form of cooperation, trust, and the effort to uphold these institutions. Here the call for experimentation and what [40] calls “permissionless innovation” shows its radically innovative perspective, with the promise to change not only the way we deal with technology, but also the way we deal with dealing with technology.

Arguing for an experimentation-first approach to technology development therefore means that not only must the search for technological solutions to current problems be subject to market-based experimentation, but so must the mechanisms that govern these processes. This is where the concept of markets alluded to above shows itself to be deeper than the traditional one involved in opposing markets to politics. It means, for example, that jurisdictions and, more generally, regulatory institutions (law-making, administrations) need to be open to flexible experimentation. The important distinction is not so much the one between politics and markets, but rather the one between politics and ethics.

Whereas ethics typically tries to find criteria with which technology can be assessed, and to propose adequate measures – mostly legal regulation – to make sure these criteria are met, the political approach concentrates on other processes by which people find ways to interact with each other, propose social change and try to influence others to accept their ethical views. The important point is that these processes proceed not only through coercive law-making, but also through the implementation of social conventions. Law-making tries to fix problems by finding facts and proposing ways to deal with these facts, making these ways compulsory for everybody. The political solution, however, conceives of these facts as conventional and malleable, dependent on interpretation and always open to being amended, redefined and negotiated [34].

It is important to bind the market orientation and the political approach together. To do this, note, first, that “political” does not refer to the institutionalized political process as we know it, which operates under the assumption of political authority. Rather, “political” refers to the “sub-political” (in the first sense of political) processes by which social life is organized (cf. for the following [34]). These processes contain social practices, norms and expectations, in brief, social conventions. These conventions form the basis for interpreting ethical questions such as who is responsible, and the facts upon which we build our ethical assessment of responsibility. Since these social conventions are dependent upon interpretation and differ between individuals, depending on their views, beliefs and norms, they are somewhat free-floating – and malleable. The malleability of norms and ethical views requires, and justifies, a constant endeavor to be socially and politically active in favor of one’s preferred view. In brief, the political sphere wherein the regulation of social life and technology takes place comprises efforts to shape answers to ethical questions surrounding technology by influencing rule-making through public persuasion and by working on social conventions. It is not as if there were a factual answer that grounds and identifies responsibility. Rather, what constitutes these grounds and identifications is an open process.

Second, a free market is the best way to enable these processes to take place, i.e. to empower people to follow their preferences and norms and to take up the work of exerting social influence. Only if interactions are based on the free exchange of ideas, and on the freedom to express one’s preferences through mechanisms of effectively rewarding those who share them, can progress be made. So, free markets should be the institution of choice when it comes to addressing the malleable and conventional nature of ethics.

Third, all of this does not rule out law-making or regulation, as should be obvious from the fact that social conventions often are, and sometimes need to be, codified and implemented in public law. However, here the idea behind letting markets decide about ethics must be applied to the political sphere as well. This means that the institutions of law-making and public regulation need to be chosen in and through market processes, just as the production of goods and services needs to be guided by market-based mechanisms. In other words: to avoid falling back into a system of coercive law and politics that tries to shape technological progress by ethical reasoning “in the dark”, we need to insert flexibility into the political system itself. As a consequence, people must be fully able to enter and exit political communities with law-making authority that transcend currently existing national borders, and in some cases even physical borders. People must be free to join political units that sometimes do and sometimes do not depend on or cover any physical territory, but that are authorized to make laws and govern the respective communities. This ensures that people have full control over how things are to be regulated, according to their own views. It makes sure people can live in communities that express their points of view, communities that allow certain technologies and not others. The political approach to ethics pursued here relates back to theories in political philosophy and economics such as those in [41, 42], from which important insights into feasible political structures can be derived.

While it is possible that people will form communities with others who share their views, and thus relatively homogeneous communities will start to exist, including some with very poor morality, the process does not stop there. Rather, the freedom to leave political units and join others will, over the course of time, lead to an open world in which many goods and services are provided by a free market, because this tends to progress towards an ethical society much better than development directed through ethical reasoning and political implementation. This world will be supported by people’s plural ethical views, and it will be stable because rules, institutions and mechanisms will have emerged that tend to enable people to prosper and flourish by giving them the freedom to shape the world according to their views.

4 Conclusion

In this paper, I have proposed a distinction between forms of agency that can be helpful in ascribing responsibility in cases where technology assumes some degree of autonomy. Responsibility is one of the major issues when it comes to assessing and evaluating technological progress. However, this account alone is not likely to settle questions about responsibility. The reason lies mainly in ethical pluralism. Therefore, I have proposed to shift normative reflection on ethical questions from ethics to politics. This means that to steer technology development in the right direction, it is necessary to have adequate political institutions and to assign processes of interaction and persuasion a more crucial role in finding out what is right and wrong with technology. The political character of this approach can thus be found along two dimensions: the politics of social interaction, and the politics of institutional design. With these two dimensions in place, we can have trust in the development of BCIs that work for the benefit of all humans.

Further work needs to be done regarding the proper foundation of such a view, its relation to other work in political philosophy, and its realization in the real world. Among other things, empirical work on how “permissionless innovation” in both technology and politics can and does work to the benefit of all humans is urgently needed. Finally, a more detailed account of how particular technologies can be developed on the basis of this approach is also needed.