
1 Introduction

The web builds on the decentralized architecture of the internet but is today centralized [1, 2]. The walled gardens [3] of social web applications can offer freedom and user dynamism [4], but they also limit the creator's/data subject's access to the data. Initiatives such as the EU's Next Generation Internet initiative (NGI) or projects like Solid [1] advocate a decentralized vision of the web. Decentralizing the web facilitates the interchange and exchange of system parts, partners, and providers. Yet, decentralization introduces trust challenges due to the many potentially unknown parties.

Building applications in a decentralized web challenges web engineers in a new way, especially with regard to trust. As decentralization brings more privacy and freedom for data [1], web engineers will have to adopt a different view on data. Data can come from anywhere, so it is highly questionable whether this data is correct and harmless, or wrong, misleading, and even harmful [5]. Due to the vast amount of data on the web, these trust decisions cannot be made by human experts but must be made by autonomous agents.

These trust decisions should not be based on a static trust relationship, nor solely on a relationship certified by an external authority. Since the agents should remain autonomous in line with the decentralized concept of the web, an external authority would reintroduce the very centralization being criticized. Evaluating trust dynamically offers clear advantages when dealing with fast-changing, unknown parties, which can even change their behavior in specific contexts after being trusted in the first place. Thus, the autonomous agents should be able to work with dynamic trust relationships that are content- and/or context-related and not dictated by another entity.

In the following, three use cases of different complexity are presented and their similarities are analyzed. The paper continues with a description of the research objectives in Sect. 3 and related work in Sect. 4. It concludes in Sect. 5 with the research agenda.

2 Use Cases

Use Case: Solid.

Solid is a well-known project that aims to give control over data back to the creator/data subject by decentralizing online data storages [1] called pods. As everyone should be free to bring their pod to any application, Solid separates the data from the application. Solid thereby enables two novel business models for web applications: (1) the data management layer with pod hosting/providing, and (2) the application business itself, avoiding data silos. Yet, there is no clear mechanism to decide whether certain data should be trusted by an application, nor whether the pod should accept new data from a specific application in/with a specific context or content.

Use Case: Smart Cities.

The digitalization of cities also includes trustworthy systems in domains like energy distribution and public/personal traffic management. As such systems should ensure trustworthy behavior and correspondingly comfortable usage, several autonomous decisions must be made at runtime. The autonomous agents making these decisions must be able to react to unpredictable events in their environment. They need to assess how trustworthy each piece of input data is and whether it should be considered for the decision. Thus, they must ensure the overall trustworthy behavior at each intermediate decision. Otherwise, the declaration of the system as behaving trustworthily can be jeopardized by decisions that were made on non-trusted or distrusted data.

Use Case: Goods Transportation.

Within the goods transport sector, delivery logistics is a complex process with manual planning beforehand. It lacks an optimized, dynamic, autonomous, secure, and trustable way of conceptually linking a delivery to a carriage within a transportation system. The dynamic interchange of goods between carriages, the dynamic splitting of one delivery into parts, the subsequent remerging of one delivery, and the dynamic separation of each carriage are closely connected aspects of goods transport. All these aspects need special consideration of trust mechanisms when it comes to AI-controlled logistics planning and execution. Regulating everything in detail without individual autonomous decisions will not support autonomous and dynamic transportation of goods.

Analysis of Use Cases.

All three use cases have in common that they separate two layers by introducing decentralization with autonomous decisions at runtime. These two layers used to be considered as one, but decentralization creates the need to separate them. The two layers subsequently cause new trust challenges. Solid, for example, separates the data layer from the application layer. In the context of smart cities, the autonomous agents' decisions are separated from the outer view of the system. And in goods transportation, the routing and delivery of goods is separated from the actual transportation methods. Without consideration of trust, decentralization would decrease the trustworthy behavior of all use cases and, respectively, their security.

If trust is evaluated only once for each participating party, all use cases would miss the possibility that context and content can change, and that a party could change its behavior after some time has passed. Thus, all use cases benefit from evaluating trust for each changed content and context. As content and context can change between two communication parties without the participation of any authority, introducing an authority for a specific trust mechanism would not only undermine the decentralization but also shift the point of view on trust: the trust would then not belong to a specific agent but to the authority. Thus, all use cases require a framework that is usable by all agents and based on a respective trust model.

To share information and knowledge between all participating parties, services, and sensors, it is suitable for all use cases to use linked data. Participating entities then do not have to store the complete data set but can leverage the distributed and decentralized description of data. While the Solid use case is even conceptually based on linked data, the other use cases also benefit from its usage.
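To illustrate the idea, the following is a minimal sketch of how a participant could publish a small description as linked data using the rdflib library; the vocabulary (e.g. ex:reportedBy, ex:trustValue) is a hypothetical example, not a standardized ontology, and not the ontology proposed in this work.

```python
# Minimal sketch: publishing a small description as linked data with rdflib.
# The ex: vocabulary is purely illustrative.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

EX = Namespace("http://example.org/trust#")

g = Graph()
g.bind("ex", EX)

# A sensor reading published by one participant ...
g.add((EX.reading42, EX.reportedBy, EX.trafficSensor7))
g.add((EX.reading42, EX.vehicleCount, Literal(118, datatype=XSD.integer)))

# ... annotated with a trust statement other agents can dereference later,
# instead of having to store the complete data set themselves.
g.add((EX.reading42, EX.trustValue, Literal(0.8, datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```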

3 Research Objectives

The goal of this doctoral work is to support content- and context-related trust in open multi-agent systems using linked data. Open multi-agent systems consist of many independent agents that collaborate to achieve a common purpose, e.g. preventing traffic jams in a city while ensuring the fastest route for all individuals. Openness means that agents can join and leave the system without restrictions or influence by/on other agents; e.g., any application provider using Solid can enter or leave the Solid ecosystem.

To support web engineers in building trustworthy applications in a decentralized web, the uncertainty about including foreign content has to be resolved, i.e. content-related trust is required. Agents can change their behavior in a decentralized system without other agents noticing, so incoming content must be checked for each individual communication. Such checks could in turn change the trust in another agent, which implies a trust check whenever a communication commences, not only once when a new agent is first sensed.
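The following sketch illustrates this notion of content-related trust under simplifying assumptions (the class name, the plausibility check, and the update rule are illustrative choices, not the mechanism proposed by this work): trust in a peer is re-evaluated at every message based on checks of the content itself, so a behavior change is noticed at the next communication.

```python
# Illustrative sketch of content-related trust: trust is updated per message.
from typing import Callable

ContentCheck = Callable[[dict], bool]

class ContentTrust:
    def __init__(self, checks: list[ContentCheck], initial: float = 0.5):
        self.checks = checks                 # e.g. plausibility or schema checks
        self.trust: dict[str, float] = {}
        self.initial = initial

    def receive(self, sender: str, content: dict) -> bool:
        """Update trust in `sender` from this message and decide acceptance."""
        score = self.trust.get(sender, self.initial)
        passed = all(check(content) for check in self.checks)
        # simple exponential update: every interaction moves the trust value
        score = 0.8 * score + 0.2 * (1.0 if passed else 0.0)
        self.trust[sender] = score
        return passed and score >= 0.5       # accept only trusted, plausible content

checker = ContentTrust(checks=[lambda c: 0 <= c.get("speed", -1) <= 130])
print(checker.receive("sensor7", {"speed": 80}))   # accepted
print(checker.receive("sensor7", {"speed": 900}))  # rejected, trust decreases
```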

Trustworthiness also depends on the communication context, i.e. context-related trust is required. The context comprises agent context, situation context, system context, and temporal context. The agent context describes the agent's preferences and capabilities. The situation context describes the goals, the availability, and the communication sequence of an agent and its involved peers, while the system context comprises the entire system. The temporal context intersects the other context types, since their context properties are related to the time dimension. Context is emergent and can therefore neither be predicted nor predefined. Thus, understanding the context is an important aspect of resolving the trust uncertainty.
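One possible, simplified encoding of these four context types is sketched below; the attribute names are illustrative assumptions and not a finalized context model.

```python
# Simplified, illustrative encoding of the four context types.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AgentContext:          # the agent's preferences and capabilities
    preferences: dict = field(default_factory=dict)
    capabilities: set = field(default_factory=set)

@dataclass
class SituationContext:      # goals, availability, communication sequence
    goals: list = field(default_factory=list)
    available: bool = True
    sequence: list = field(default_factory=list)

@dataclass
class SystemContext:         # properties of the entire system
    known_agents: set = field(default_factory=set)

@dataclass
class Context:               # the temporal dimension intersects all of them
    agent: AgentContext
    situation: SituationContext
    system: SystemContext
    observed_at: datetime = field(default_factory=datetime.utcnow)
```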

To reach this objective, a respective framework has to be developed. As shown in Fig. 1, trust models should be used as a basis to dismantle the uncertainty in the introduced use cases, but they have gaps. These gaps are demonstrated by the use cases, as discussed in the analysis of the use cases. A framework for the respective trust model in linked data has to be developed so that the agents in the use cases can use it. It will thus be based on the trust model(s) and will be applied to the use cases for evaluation purposes. To establish trust correctly, the framework must close the demonstrated gaps with a suitable underlying trust model and also satisfy the requirements specified by those gaps. It is envisaged to find or create a well-fitting trust model, but the framework could also exchange the underlying model from scenario to scenario.

Fig. 1. Solution concepts
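The intended shape of such a framework could look roughly like the following sketch, where the trust model is a pluggable component that can be exchanged from scenario to scenario. The names (`TrustModel`, `TrustFramework`, the example models, and the threshold) are assumptions made for illustration only.

```python
# Sketch of a framework with an exchangeable trust model.
from abc import ABC, abstractmethod
from typing import Any

class TrustModel(ABC):
    @abstractmethod
    def evaluate(self, agent_id: str, content: Any, context: Any) -> float:
        """Return a trust value in [0, 1] for this content in this context."""

class OptimisticModel(TrustModel):
    def evaluate(self, agent_id, content, context):
        return 0.9

class CautiousModel(TrustModel):
    def evaluate(self, agent_id, content, context):
        return 0.1 if context is None else 0.6

class TrustFramework:
    def __init__(self, model: TrustModel, threshold: float = 0.5):
        self.model = model
        self.threshold = threshold

    def set_model(self, model: TrustModel) -> None:
        self.model = model               # exchange the underlying trust model

    def accept(self, agent_id: str, content: Any, context: Any = None) -> bool:
        return self.model.evaluate(agent_id, content, context) >= self.threshold

framework = TrustFramework(OptimisticModel())
framework.set_model(CautiousModel())     # swap models between scenarios
print(framework.accept("podProvider1", {"data": "example"}))
```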

4 Related Work

Policy- and Reputation-Based Trust.

Trust inferences based on policies or strict security mechanisms can be grouped as policy-based trust [6]. Trust is here established “by obtaining a sufficient amount of credentials pertaining to a specific party, and applying the policies to grant that party certain access rights” [6, p. 59]. Another type of trust establishment is called reputation-based trust [6], where the reputation of others is used to infer trust. Thereby a web of trust [6] is established without any authority.
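A minimal illustration of the two families, under simplifying assumptions: a policy check over presented credentials on the one hand, and a reputation value aggregated from peer opinions (a small web of trust) on the other. Both functions are toy examples, not the mechanisms of [6].

```python
# Toy illustration of policy-based vs. reputation-based trust.
def policy_based_trust(credentials: set[str], required: set[str]) -> bool:
    # grant access only if the party presents all credentials the policy demands
    return required.issubset(credentials)

def reputation_based_trust(ratings: dict[str, float],
                           weights: dict[str, float]) -> float:
    # weight each peer's rating by how much we trust that peer's opinion
    total = sum(weights.get(peer, 0.0) for peer in ratings)
    if total == 0:
        return 0.0
    return sum(ratings[p] * weights.get(p, 0.0) for p in ratings) / total

print(policy_based_trust({"memberCert", "ageProof"}, {"memberCert"}))   # True
print(reputation_based_trust({"alice": 0.9, "bob": 0.4},
                             {"alice": 1.0, "bob": 0.5}))               # ~0.73
```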

Trust Models.

To further compare trust values, a computational trust model is required. Recent work shows that many different models exist for specific scenarios [7, 8]. Cao et al. [9] introduce a model that is very close to the mentioned smart-city use case, where the sharing of data in such a city is modeled with regard to transparency, accountability, and privacy. Falcone and Castelfranchi [10] are “dealing with the dynamic nature of trust, and making the realization that an agent that knows he’s trusted may act differently from one who does not know his level of trust” [6, p. 65]. Besides the computational models of trust, the meaning of trust draws on the social sciences and their respective modeling of trust [11].

Content Trust.

Since the problem addressed by this doctoral work requires generating dynamic trust relationships within linked data, the approach of content trust [12] is very important for this work. It changes once-evaluated, static trust relations into dynamic ones with regard to the mentioned content. However, this approach establishes trust in another agent’s content, while it lacks aspects like forgiveness, regret, distrust, mistrust, and a cooperation threshold as specified by Marsh and Briggs [8].
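The rough sketch below indicates the kind of dynamics content trust lacks. It is loosely inspired by the notions named by Marsh and Briggs [8] (regret, forgiveness, distrust, cooperation threshold), but it is an illustrative update rule invented here, not their formalism.

```python
# Illustrative (not Marsh/Briggs) trust dynamics with regret, forgiveness,
# distrust and a cooperation threshold.
class DynamicTrust:
    def __init__(self, value: float = 0.0,
                 regret: float = 0.4, forgiveness: float = 0.05,
                 cooperation_threshold: float = 0.3):
        self.value = value                       # in [-1, 1]; < 0 means distrust
        self.regret = regret                     # penalty after a betrayal
        self.forgiveness = forgiveness           # slow recovery over time
        self.cooperation_threshold = cooperation_threshold

    def experience(self, positive: bool) -> None:
        if positive:
            self.value = min(1.0, self.value + 0.1)
        else:
            self.value = max(-1.0, self.value - self.regret)   # regret: sharp drop

    def tick(self) -> None:
        # forgiveness: distrust fades slowly if nothing bad happens
        if self.value < 0:
            self.value = min(0.0, self.value + self.forgiveness)

    def cooperate(self) -> bool:
        return self.value >= self.cooperation_threshold

t = DynamicTrust()
t.experience(True); t.experience(True); t.experience(False)
print(t.value, t.cooperate())
```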

Trust in Multi-agent Systems.

As all three use cases named in Sect. 2 consider many agents in one system, the interaction between those agents also influences trust. Such multi-agent systems therefore also have to account for the interference of others and for the knowledge that others trust an agent, with regard to the agent’s behavior [13]. Huynh et al. [14] already mention the uncertainty in open multi-agent systems and present an integrated trust and reputation model. Yet, their model has issues regarding lies and does not consider a change of trust after the first trust establishment.
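In the spirit of such integrated trust and reputation models, one could combine direct experience with witness reports as in the following sketch; the weighting scheme is an illustrative assumption, not the model of Huynh et al. [14].

```python
# Illustrative combination of direct experience and witness reports.
def integrated_trust(direct: float, witness_reports: list[float],
                     w_direct: float = 0.6, w_witness: float = 0.4) -> float:
    witness = (sum(witness_reports) / len(witness_reports)
               if witness_reports else 0.0)
    return w_direct * direct + w_witness * witness

# direct experience is good, but several witnesses report bad behaviour
print(integrated_trust(0.8, [0.2, 0.1, 0.3]))   # ≈ 0.56
```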

5 Research Agenda

Trust Models Selection.

At first, the appropriate computational trust model(s) must be found. The intended starting point, as shown in Fig. 2, is a survey of available trust models in order to identify, analyze, and evaluate them. This survey may show the need to create a new trust model that fits the content and context relations, which would still benefit from the survey so that no model has to be developed from scratch. But as the purpose of this doctoral work is not the creation of a new trust model, the intention is to combine available trust models. As already mentioned in Sect. 3, the survey could also come to the result that several models are important and have to be exchanged from use case to use case.

Fig. 2. Research agenda

Problem Analysis.

The framework design needs to fit the actual problem. The requirements can be derived from a problem analysis with respect to the observed gaps of the trust models. Some requirements are already written down in the problem statement. Yet, there could be more, as the framework should be integrated into the use cases and their specific multi-agent systems.

Framework Design.

With the problem analysis finished, the conceptual design of the framework can be started. A corresponding first prototype will be implemented for the utilization of the framework and its further evaluation.

Evaluation and Utilization.

After the framework has a clear conceptual design with respect to its requirements, it has to fit into the use cases. Therefore, the framework will be implemented and reworked by including it in each use case, one after another. Every use case utilization thus improves the framework itself and provides a small evaluation in a specific scenario. After three successful use case integrations, the framework will be further evaluated with a focus on all requirements of the problem analysis.