
1 Introduction

In today's networked society, incorrect information is sometimes, unfortunately, distributed. For example, when a witness obtains new information and shares it with individuals close to him, those individuals form their own opinions and pass the information on. As this process repeats, all individuals in the network come to share the information through the mutual relationships between them. However, the shared information is not always correct, because there is no mechanism that can filter conflicting information so that the correct opinion is shared. To address this problem, the Opinion Sharing Model (OSM) [3] was proposed for analyzing opinion sharing in a distributed network composed of multiple agents [2]. In OSM, most agents form their opinions according to their neighbors' opinions, which may be incorrect, while only a few agents can receive outside information, which is expected to be correct but may be corrupted by noise. To increase the rate at which the correct opinion is shared, Autonomous Adaptive Tuning (AAT) [6] was proposed, but AAT works well only under the condition that no agent has yet formed an opinion, which is highly unrealistic in real society, where every individual holds some initial opinion toward a topic. To tackle this problem, this paper proposes Autonomous Adaptive Tuning Dynamic (AATD) for networks in which the initial opinions of all agents are already determined.

The paper is structured as follows. Section 2 describes the Opinion Sharing Model, and Sect. 3 explains AAT. Section 4 describes the proposed AATD and its modifications. Section 5 presents the experimental results and Sect. 6 discusses them. Finally, Sect. 7 concludes this work.

2 Opinion Sharing Model

2.1 Sensor and Normal Agents

To capture the dynamics of opinion sharing in a distributed network system, Glinton proposed the Opinion Sharing Model (OSM), composed of multiple agents. The agents form a complex network and communicate their opinions with each other. Concretely, each agent receives opinions from the agents it is directly connected to (i.e., its neighbor agents), forms its own opinion by referring to the received opinions, and sends the formed opinion to its neighbor agents. By repeating this process, the agents eventually share the same opinion among them. What should be noted here is that a few agents (called \(sensor\ agents\)), which have a sensing function, can observe information from the environment in addition to receiving opinions from their neighbor agents, while most agents (called \(normal\ agents\)), which lack the sensing function, can only receive opinions from their neighbor agents. Since the observations of the sensor agents are not perfectly accurate, they may form incorrect opinions, which then spread through the network.

2.2 Description of Opinion Sharing Model

In OSM, the network \(G(A, E)\) consists of a large set of agents A and a set of their connections E, as described in Eqs. (1) and (2).

$$\begin{aligned} A = \{i^1 ... i^N \}, N=|A| \end{aligned}$$
(1)
$$\begin{aligned} E = \{ (i,j):i,j \in A \} \end{aligned}$$
(2)

In the above equations, \(i (\in A)\) denotes an agent (i.e., a normal or sensor agent) and \(i^1\) denotes the first agent. \(N (= |A|)\) denotes the number of agents, and \((i,j)\) denotes the connection between agents i and j. In OSM, \({(i,j)} = {(j,i)}\) because the connections are undirected. \(D_i=\{j:\exists (i,j)\in E\}\) denotes the set of neighbor agents of agent i. The set of sensor agents is represented by \(S \subset A, |S| \ll N\). For simplicity, the sensor agents can only observe binary environmental information \(B = \{white, black\}\). Note that this simplification still serves the purpose of capturing the dynamics of opinion sharing in complex networks [7].

When agent i receives opinions from its neighbor agents, it updates its belief \(P_i(b = white)\), or equivalently \({P_i(b = black) = 1 - P_i(b = white)}\), which is the probability with which it believes white / black to be the correct opinion. After updating the belief, the agent forms its own opinion \(o_i\) if its belief exceeds the threshold \(\sigma \) or falls below \(1 - \sigma \). For clarity, we visualize an agent with its belief and opinion in Fig. 1. The upper semicircle represents the belief area, while the lower semicircle represents the opinion area. In the belief area, the thick black bar (whose length is larger than the radius of the circle) represents the current belief, while the thin black lines at fixed intervals indicate the possible degrees (locations) of the belief, which change through updates. In the opinion area, on the other hand, the color of the lower semicircle changes when the agent forms its opinion. For example, if the agent forms \(o = white\), the color of the opinion area becomes white.

Fig. 1. Visualization of the agent

The belief of an agent is updated from its previous value according to Bayes' theorem, as shown in Eq. (3). In this equation, \({P_i^k}\) and \({P_i^{k-1}}\) respectively denote the current (\(k\)th) and previous (\((k-1)\)th) belief of agent i, where k is the current step. When a sensor agent observes information from the environment, its belief is updated according to the accuracy \(r\ (0.5 < r \ll 1)\). When a normal agent receives information from its neighbors, on the other hand, its belief is updated according to the importance level \(t\ (0.5 < t \ll 1)\) in place of r in Eq. (3); t indicates the influence of the neighbor agents on the normal agent. For example, if \(t = 0.5\) the belief is not updated, meaning that the agent ignores the received opinions, while if \(t = 1\) (or \(t = 0\)) the belief becomes \(P_i^k = 1\) (or 0) regardless of the value of \({P_i^{k-1}}\).

$$\begin{aligned} P_i^k=\frac{C_{upd}P_i^{k-1}}{(1-C_{upd})(1-P_i^{k-1})+C_{upd}P_i^{k-1}}, \quad \text {where}\ C_{upd}= \begin{cases} r & \text {if } s_i = white\\ 1-r & \text {if } s_i = black \end{cases} \end{aligned}$$
(3)
$$\begin{aligned} o_i^k = \begin{cases} undeter \text { or } initial & \text {if } k=0\\ white & \text {if } P_i^k \ge \sigma \\ black & \text {if } P_i^k \le 1-\sigma \\ o_i^{k-1} & \text {otherwise} \end{cases} \end{aligned}$$
(4)
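To make the update rule concrete, the following Python sketch implements Eqs. (3) and (4). The function names, the string encoding of opinions, and the parameter values are our own illustration, not part of the original model.

```python
# A minimal sketch of the OSM belief/opinion update (Eqs. (3) and (4)).
# Names and values are illustrative, not from the original paper.

SIGMA = 0.9  # confidence bound, 0.5 < sigma < 1

def update_belief(p_prev: float, received: str, weight: float) -> float:
    """Bayesian belief update (Eq. (3)).

    p_prev   : previous belief P_i^{k-1} that the truth is 'white'
    received : received opinion/observation, 'white' or 'black'
    weight   : r (sensor accuracy) for observations, or t (importance
               level) for opinions received from neighbors
    """
    c_upd = weight if received == "white" else 1.0 - weight
    return (c_upd * p_prev) / ((1 - c_upd) * (1 - p_prev) + c_upd * p_prev)

def form_opinion(p: float, prev_opinion: str, sigma: float = SIGMA) -> str:
    """Opinion formation with the hysteresis rule (Eq. (4))."""
    if p >= sigma:
        return "white"
    if p <= 1.0 - sigma:
        return "black"
    return prev_opinion  # inside the bounds the previous opinion is kept

# Example: an undetermined agent receiving a run of white opinions.
p, opinion = 0.5, "undeter"
for _ in range(6):
    p = update_belief(p, "white", weight=0.7)
    opinion = form_opinion(p, opinion)
```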

Figure 2 shows how the belief changes until the agent forms its opinion. The agent updates its belief P when receiving opinions from the neighbor agents and forms the white opinion when \(P > \sigma \). The thresholds \(\sigma \) and \(1-\sigma \) are the confidence bounds, with \(0.5<\sigma <1\), beyond which the agent is confident enough to form its opinion.

Fig. 2. The agent's belief P is updated when receiving opinions from the neighbor agents. The agent forms the white opinion when \(P > \sigma \) and the black opinion when \(P < 1 - \sigma \)

Figure 3 shows the sharp hysteresis loop of the opinion update function proposed by Pryymak et al. [6]: once agent i has formed the white (black) opinion, its belief \(P_i^k\) must fall below \(1 - \sigma \) (rise above \(\sigma \)) before the opinion changes to black (white).

Fig. 3. Update rule of opinion

2.3 Performance Metrics of a Model

The model is simulated over a set of dissemination rounds \(M=\{m_l: l\in 1...|M|\}\). Within every round, the agents update their beliefs over steps \(k \in K\), and at the end of each round \(m_l\) the conclusive opinions are observed. Each round runs for enough steps for the agents to converge to their own opinions. When a round finishes, the current state expires: after the new round starts, the agents reset their opinions and beliefs. Algorithm 1 describes opinion sharing by the agents in OSM.

Algorithm 1. Opinion sharing by the agents in OSM
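Algorithm 1 is given only as a figure in the original paper; the sketch below reconstructs one dissemination round from the description above, reusing `update_belief` and `form_opinion` from the previous sketch. The data structures and the cascade loop are our assumptions about how the round proceeds, not the authors' exact pseudocode.

```python
import random

def run_round(agents, network, sensors, truth, r=0.55, num_observations=1000):
    """One dissemination round of OSM (a reconstruction of Algorithm 1).

    agents  : dict id -> {"p": belief, "opinion": str, "t": importance level}
    network : dict id -> list of neighbor ids
    sensors : list of sensor agent ids
    truth   : the true state b of the round, "white" or "black"
    """
    flip = {"white": "black", "black": "white"}
    for _ in range(num_observations):
        # A random sensor observes the environment with accuracy r.
        i = random.choice(sensors)
        obs = truth if random.random() < r else flip[truth]
        queue = [(i, obs, r)]
        # Propagate opinions until the cascade stops.
        while queue:
            j, msg, weight = queue.pop()
            a = agents[j]
            old = a["opinion"]
            a["p"] = update_belief(a["p"], msg, weight)
            a["opinion"] = form_opinion(a["p"], old)
            if a["opinion"] != old:  # only a newly formed opinion is sent on
                for k in network[j]:
                    queue.append((k, a["opinion"], agents[k]["t"]))
```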

To measure the average accuracy of the agents' opinions at the end of each round, Glinton proposed the accuracy metric R, which is the proportion of agents in the community that form the correct opinion [4].

$$\begin{aligned} R=\frac{1}{N|M|}\sum _{i\in A}|\{m\in M:o_i^m=b^m\}|\cdot 100\% \end{aligned}$$
(5)

Furthermore, Pryymak proposed a performance index for the single agent [6]. An agent cannot perceive whether its opinion has been formed correctly; what it can track is how often it forms an opinion at all. Pryymak denotes this as the awareness rate \(h_i\) of agent i.

$$\begin{aligned} h_i=\frac{|\{m\in M:o_i^m\ne undeter\}|}{|M|} \end{aligned}$$
(6)

This myopic metric can be calculated locally by each agent, and it is an important metric for the AAT algorithm described in Sect. 3.
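Both metrics are straightforward to compute; a minimal sketch, assuming opinions are recorded per round as in the previous sketches:

```python
def accuracy(opinions_per_round, truths):
    """Community accuracy R (Eq. (5)).

    opinions_per_round : list over rounds of dicts id -> final opinion o_i^m
    truths             : list over rounds of the true state b^m
    """
    n = len(opinions_per_round[0])
    correct = sum(
        1
        for opinions, b in zip(opinions_per_round, truths)
        for o in opinions.values()
        if o == b
    )
    return correct / (n * len(truths)) * 100  # percentage

def awareness_rate(opinions_of_agent):
    """Awareness rate h_i (Eq. (6)): the fraction of rounds in which
    the agent formed any opinion at all."""
    formed = sum(1 for o in opinions_of_agent if o != "undeter")
    return formed / len(opinions_of_agent)
```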

3 Autonomous Adaptive Tuning (AAT) Algorithm

In this section, we explain the Autonomous Adaptive Tuning (AAT) algorithm. The algorithm is designed to improve the accuracy R through the agents communicating their opinions with each other in various complex networks. In this algorithm, the agents automatically update their beliefs relying only on local information. The key observation is as follows: the accuracy R increases when the dynamics of opinion sharing is at the phase transition between the stable state (when opinions are not shared throughout the community, \(\forall i\in A:h_i\ll 1\)) and the unstable one (when opinions propagate on a large scale, \(h_i=1\)). Accordingly, the agents need to share opinions in smaller groups before a large cascade occurs, without overreacting to incorrect opinions. To find the optimal parameter setting for this, the algorithm tunes the importance level of each agent individually.

3.1 Description of AAT

The algorithm has three stages for tuning the importance levels. Algorithm 2 shows how AAT runs in OSM.

  • Each agent running AAT builds candidates of the importance levels to reduce the search space for the following stages. This step runs only once, at the start of the experiment. (BuildCandidate())

  • After each dissemination round, the agent estimates the awareness rates of the candidate levels described in Sect. 2.3. (EstimateAwarenessRate())

  • The agent selects the importance level for the next round based on the estimated awareness rates of the candidate levels, considering how close each is to the target awareness rate. The importance levels must be tuned gradually, taking the influence of the agent's neighbors into account. (SelectImportanceLevel())

In the following sections, we describe the three stages of the AAT algorithm in detail.

Algorithm 2. AAT running in OSM

3.2 Candidate Importance Levels

In this section, we describe how an agent running AAT estimates the candidate importance levels \(T_i\) of Eq. (7). By estimating a set of candidate importance levels, the agent reduces the continuous problem of selecting the importance level to use, \(t_i\), from the range [0.5, 1] to a discrete problem. Since the number of sensor agents is much smaller than the total number of agents, we focus on the normal agents, which update their beliefs using only the opinions of their neighbor agents. Figure 4 shows sample dynamics of an agent's belief, where agent i first receives black opinions \(o(b = black)\) and then a larger number of white opinions \(o(b = white)\). Starting from its prior \(P'_i\), the agent receives 6 black opinions from its neighbor agents, updates its belief until it falls below \(1 - \sigma \), and forms the black opinion. After that, the agent receives 15 white opinions, updates its belief until it exceeds \(\sigma \), and forms the white opinion. In these dynamics, the most important moments are \(k = 6\) and \(k = 21\), because these are the only times the agent sends a new opinion to its neighbor agents. Consequently, we focus on how many times the agent updates its belief before changing its own opinion. According to the opinion update rule in Sect. 2, we consider the case where the belief of the agent matches one of the confidence bounds, \(P_i^k\in \{\sigma ,1-\sigma \}\). Since the maximum number of opinions the agent can receive is limited by the number of its neighbors, \(|D_i|\), the number of candidate importance levels can be reduced: the agent should find the importance levels for which its belief coincides with one of the confidence bounds, \(P_i^l\in \{\sigma ,1-\sigma \}\), in \(l\in 1...|D_i|\) updates (see Eq. (3)). After solving this problem, the agent obtains the set of candidate importance levels that lead to opinion formation after receiving \(1...|D_i|\) opinions.

$$\begin{aligned} T_i=\{t_i^l: P_i^l(t_i^l)=\sigma ,\ l\in 1...|D_i|\}\cup \{t_i^l: P_i^l(t_i^l)=1-\sigma ,\ l\in 1...|D_i|\} \end{aligned}$$
(7)
Fig. 4. The agent i with initial belief \(P'_i\) first receives 6 black opinions and then 15 white opinions.

Consequently, the set of candidate importance levels is limited to twice the number of neighbors, \(|T_i|=2|D_i|\). This is the necessary and sufficient set of candidate importance levels with which the agent forms its opinion after the different numbers of update steps, and it needs to be initialized only once. After this stage, the agent has to estimate the optimal importance level within its set of candidates.
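The candidate levels can be computed numerically. The sketch below finds, for each \(l\in 1...|D_i|\), the importance level whose l-fold update from the prior \(P'_i\) lands exactly on a confidence bound, by bisection on t (the belief after l same-colored updates is monotone in t). The helper names and the bisection approach are our assumptions; the original paper only states the defining condition of Eq. (7).

```python
def belief_after(p0: float, t: float, l: int, color: str) -> float:
    """Belief after l consecutive updates with same-colored opinions (Eq. (3))."""
    p = p0
    for _ in range(l):
        p = update_belief(p, color, t)
    return p

def candidate_levels(p0: float, degree: int, sigma: float = 0.9):
    """Candidate importance levels T_i (Eq. (7)): for each l in 1..|D_i|,
    the level t with which l updates move the belief from p0 exactly to
    a confidence bound, found by bisection on t."""
    candidates = set()
    for color, bound in (("white", sigma), ("black", 1.0 - sigma)):
        for l in range(1, degree + 1):
            lo, hi = 0.5, 1.0
            for _ in range(50):  # belief after l updates is monotone in t
                mid = (lo + hi) / 2.0
                p = belief_after(p0, mid, l, color)
                reached = p >= bound if color == "white" else p <= bound
                if reached:
                    hi = mid  # mid is strong enough; try a smaller level
                else:
                    lo = mid
            candidates.add(round(hi, 4))
    return sorted(candidates)  # |T_i| <= 2|D_i| (duplicates collapse)
```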

3.3 Estimation of the Agents' Awareness Rates

In this section, we describe how an agent selects the importance level with which the network achieves a high accuracy R. As mentioned above, AAT is based on the observation that the accuracy R of the network improves when the opinion sharing dynamics is at the phase transition between the stable and unstable states. If all agents select the minimal importance levels among their candidates, only the few agents close to the sensor agents can form opinions and most of the rest cannot. On the other hand, if all agents select the maximal importance levels among their candidates, the incorrect observations of the sensor agents may be shared across the network. To estimate the optimal importance levels, the agents therefore have to select the minimal importance levels with which they can still form their opinions. In OSM, the two conditions below must hold in order to maximize the accuracy R.

  • Each agent has to form its opinion. Consequently, each agent should reach a high awareness rate \(h_i\), because agents without determined opinions lower the accuracy of the community.

  • Each agent has to form its opinion as late as possible using only local information, after gathering the maximum number of opinions from its neighbors.

To satisfy these conditions, the agent has to select the minimal importance level \(t_i^l\in T_i\) from the candidates such that it can still form its opinion \((h_i=1)\).

However, since the observations of the sensor agents are corrupted by random noise, the phase-transition-like dynamics of opinion sharing behaves stochastically. The agents may fail to form their opinions until the opinions are shared on a large scale, and their awareness rates suffer. The agents should select the requisite minimum importance levels \(t_i^l\) from the candidates \(T_i\) that make sharing the correct opinion successful, so that the awareness rate of the selected importance level approaches the target awareness rate \(h_{trg}\), which is set slightly below the maximum \(h_i=1\). Each agent solves the following optimization problem:

$$\begin{aligned} t_i={\mathop {\hbox {arg min}}\limits _{t_i^l\in T_i}} \,|h_i(t_i^l)-h_{trg}| \end{aligned}$$
(8)

Equation (8) requires \(h_i(t_i^l)\) for every \({t_i^l \in T_i}\), but the agent can only observe the awareness rate of the importance level it actually selected in the round. Pryymak therefore proposed a means of estimating the other awareness rates \(h_i(t_i^l)\) by analyzing the agent's belief updates during opinion sharing, distinguishing the two cases below.

  • Case 1 (the agent formed its opinion)

    If the agent forms an opinion \({o_i^m \ne undeter}\) with \({t_i}\) in round m, then every importance level \({t_i^l \ge t_i}\) at or above the selected one would also have led to opinion formation, since \({|P_i(t_i^l)| > |P_i(t_i)|}\).

  • Case 2 (the agent received the opinions)

    If the agent cannot form an opinion, we can compare the number of updates it has observed with the number required for a candidate level, \(t_i^l\), to form an opinion. Specifically, the minimal number of belief updates required to form an opinion with the candidate level \(t_i^l\) can be calculated by recursively updating the agent's belief from its initial belief \(P'_i\) until it exceeds one of the confidence bounds \([1 - \sigma , \sigma ]\), using \(t_i^l\) for the updates. Pryymak denotes this function as \(u(t_i^l, P'_i, {\sigma })\). At the same time, during the round the agent can observe the maximum number of updates starting from \(P'_i\); this value is denoted \({u}_i^m\).

In Fig. 4, the last belief update step gives \({u}_i^m = |21 - 12| = 9\). Every candidate \(t_i^l\) whose required number of updates is smaller than \({u}_i^m\) would have led to opinion formation.

Combining these cases, Pryymak proposed the boolean function OpinionFormed. The function returns True if the agent would have formed an opinion in the current round m using the candidate importance level \(t_i^l\), given that the importance level actually used was \(t_i\):

$$\begin{aligned} OpinionFormed(t_i^l, t_i, m)&= \Big ( o_i^m \ne undeter \wedge t_i^l \ge t_i \Big ) \nonumber \\&\ \vee \Big ( u_i^m \ge u(t_i^l, P'_i, \sigma ) \Big ) \end{aligned}$$
(9)

To estimate the awareness rates of the candidate levels, Pryymak proposed the estimated awareness rate \(\hat{h_i}\), built on Eq. (6). The measure gives the proportion of rounds in which the agent would form an opinion with \(t_i^l\):

$$\begin{aligned} \hat{h_i}=\frac{|\{m\in M:OpinionFormed(t_i^l, t_i, m) = True\}|}{|M|} \end{aligned}$$
(10)

Algorithm 3 describes the estimation of the awareness rates. If the agent does not receive any opinion in the round, it cannot form an opinion with any importance level (Line 1). If the agent receives any opinions, it updates the awareness rate of each candidate importance level according to Eq. (9): depending on whether \(OpinionFormed(t_i^l, t_i, m)\) is True or False, the agent updates the estimated awareness rate accordingly (Lines 4, 5).

Algorithm 3. Estimation of the awareness rates
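A minimal sketch of Eqs. (9) and (10), assuming \(u(t_i^l, P'_i, \sigma)\) has been precomputed for every candidate (e.g., with `belief_after` from the earlier sketch) and that the estimates are maintained incrementally; the incremental bookkeeping is our assumption about how Algorithm 3 is organized.

```python
def opinion_formed(t_cand, t_used, opinion_m, u_obs, u_required):
    """Boolean OpinionFormed (Eq. (9)).

    t_cand     : candidate importance level t_i^l
    t_used     : level actually used in round m, t_i
    opinion_m  : the agent's opinion at the end of round m
    u_obs      : observed number of belief updates u_i^m in round m
    u_required : u(t_i^l, P'_i, sigma) for this candidate (precomputed)
    """
    return (opinion_m != "undeter" and t_cand >= t_used) or u_obs >= u_required

def update_estimates(h_est, formed_counts, rounds, t_used, opinion_m, u_obs, u_req):
    """One round of awareness-rate estimation: refresh the estimate
    h^_i(t_i^l) for every candidate level (Eq. (10))."""
    for t_cand in h_est:
        if opinion_formed(t_cand, t_used, opinion_m, u_obs, u_req[t_cand]):
            formed_counts[t_cand] += 1
        h_est[t_cand] = formed_counts[t_cand] / rounds
```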

3.4 Strategy for Selecting Importance Levels

Through the interdependence between the opinions of an agent and those of its neighbors, each agent affects the dynamics and the awareness rates of all agents. If an agent greedily selects the optimal importance level following the definition of its optimization problem (Eq. (8)), it may drastically change the local dynamics of the network. The agent therefore has to select a strategy that avoids dramatic changes in the dynamics, so that the awareness rates of the network can be estimated accurately and the search converges faster. To this end, the agent exploits the following fact: if the importance levels are sorted in ascending order, their awareness rates increase monotonically, because the minimum importance level \(t_i^{min}\) requires many more updates than the maximum importance level \(t_i^{max}\). Based on this fact, the agent employs a hill-climbing strategy, sketched after Algorithm 4. If the awareness rate of the current importance level \(t_i=t_i^l\) is lower than the target, \(\hat{h}_i^l<h_{trg}\), the agent increases the importance level to the closest larger one (i.e., \(l=l+1\)). If the awareness rate of the next smaller importance level still exceeds the target, \(\hat{h}_i^{l-1}>h_{trg}\), the agent uses that importance level in the next round (i.e., \(l=l-1\)). Agents employing the hill-climbing strategy deliver higher accuracy than agents employing the greedy strategy [6].

Algorithm 4. Selection of the importance level
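A sketch of the hill-climbing rule just described, with the candidate levels sorted in ascending order; the function signature is illustrative.

```python
def select_importance_level(levels, h_est, l, h_trg=0.85):
    """Hill-climbing selection of the next importance level (Sect. 3.4).

    levels : candidate levels T_i sorted in ascending order
    h_est  : dict level -> estimated awareness rate (monotone in t)
    l      : index of the level used in the current round
    """
    if h_est[levels[l]] < h_trg and l + 1 < len(levels):
        return l + 1  # too few opinions formed: step up to the next level
    if l > 0 and h_est[levels[l - 1]] >= h_trg:
        return l - 1  # the next smaller level already meets the target
    return l          # stay at the current level
```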

4 Autonomous Adaptive Tuning Dynamic (AATD) Algorithm

AAT cannot learn the importance levels in OSM when the agents hold initial opinions. AATD is an algorithm that realizes accurate opinion sharing regardless of whether the agents have initial opinions, obtained by modifying AAT. The modifications concern AAT's estimation of the awareness rates.

4.1 First Modification of the Estimation of the Agents' Awareness Rates

In a network where all agents have already formed opinions, the agents with AAT always judge that they are forming opinions and update \(\hat{h}_i^l\) via Eq. (10). As a result, the awareness rates of all candidate importance levels become 1 and stay fixed. This is because AAT calculates the awareness rates from whether the agents form opinions or not; consequently, the agents with AAT cannot learn the appropriate importance level. For this reason, as a first modification, AATD calculates the awareness rate from whether the agent changes its opinion or not. With this modification, the problem of the awareness rates being fixed at 1 is solved. Figure 5 shows that AAT updates the awareness rate only when the agent determines its opinion from the undetermined state (a), whereas AATD updates the awareness rate whenever the agent changes its opinion state ((a), (b), and (c)). The red line marks the range in which AAT can update the awareness rate; the green line marks the range for AATD.

Fig. 5. The range of updating the awareness rate in AAT and AATD (Color figure online)

4.2 Second Modification of the Estimation of the Agents' Awareness Rates

However, a problem remains: the agents still cannot learn the appropriate importance level. When updating the awareness rates based on the number of received opinions, the awareness rates of importance levels with which the agent cannot form any opinion may also be updated, so the agent may select an inappropriate importance level. To solve this problem, when updating based on the number of received opinions, AATD updates the awareness rates only for the importance levels with which the agent can actually form an opinion. Figure 6 illustrates the difference for 5 received opinions (\(u_i^m = 5\)): AAT also updates the awareness rates of importance levels with which the agent cannot form the opinion, whereas AATD updates only those with which it can. The left table lists the candidate importance levels of agent i; \(u(t_i^l, P'_i, \sigma )\) is the number of belief updates needed to move \(P_i\) from the initial belief \(P'_i\) to \(\sigma \) or \(1-\sigma \) with candidate \(t_i^l\). For example, the entry 0.973 under Black Opinion Formed with \(u(t_i^l, P'_i, \sigma ) = 1\) means that the agent forms the black opinion after receiving 1 black opinion; likewise, 0.700 under White Opinion Formed with \(u(t_i^l, P'_i, \sigma ) = 1\) means that the agent forms the white opinion after receiving 1 white opinion. The area surrounded by the red line is the update range of AAT; the area surrounded by the green line is the update range of AATD. The blue area covers the importance levels with which the agent can form the opinion, and the orange area those with which it cannot. The illustration on the right shows the agent selecting \(t = 0.671\).

Fig. 6. Candidates of the agent (the agent can form the opinion with the importance levels in the blue area and cannot with those in the orange area) and the view of the agent with \(t = 0.671\) (Color figure online)

4.3 AATD Estimation of the Agents' Awareness Rates

The criteria below are used for estimating the awareness rate in AATD.

  • Case 1 (the agent formed its new opinion)

    If the agent forms a new opinion \({o_i^m \ne init}\) with \({t_i}\) in round m, then every importance level \({t_i^l \ge t_i}\) at or above the selected one would also have led to the new opinion, since \({|P_i(t_i^l)| > |P_i(t_i)|}\).

  • Case 2 (the agent received the opinions)

    Suppose the agent receives opinions but cannot form a new opinion. In that case, the agent could have formed the new opinion if it had used an importance level at or above the one needed to form the new opinion within the observed number of belief updates. Let \(t(u_i^m)\) denote the importance level that forms the opinion within the observed number of updates \(u_i^m\).

    As in AAT, the minimal number of belief updates required to form the opinion with a candidate level \(t_i^l\), denoted \(u(t_i^l, P'_i, {\sigma })\), can be calculated by recursively updating the agent's belief from its initial belief \(P'_i\) until it exceeds one of the confidence bounds \([1 - \sigma , \sigma ]\), and \({u}_i^m\) is the maximum number of updates from \(P'_i\) observed during the round. Every candidate \(t_i^l > t(u_i^m)\) is therefore likely to form the opinion.

Based on the above criteria for estimating the new awareness rate, we redefine OpinionFormed as \(AATD\ OpinionFormed\):

$$\begin{aligned} AATD\ OpinionFormed(t_i^l, t_i, m)&= \Big ( o_i^m \ne \underline{init} \wedge t_i^l \ge t_i \Big ) \nonumber \\&\ \vee \Big ( \underline{t(u_i^m) \le t_i^l} \Big ) \end{aligned}$$
(11)
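A sketch of Eq. (11), mirroring the earlier OpinionFormed sketch; \(t(u_i^m)\) is assumed to be precomputed as the smallest candidate that forms a new opinion within the \(u_i^m\) observed updates, and the argument names are illustrative.

```python
def aatd_opinion_formed(t_cand, t_used, opinion_m, initial_opinion, t_min_forming):
    """Boolean AATD OpinionFormed (Eq. (11)).

    Compared with Eq. (9): an opinion only counts if it differs from the
    initial one, and a candidate counts if it is at least t(u_i^m), the
    smallest level that forms a new opinion within the u_i^m observed
    updates. t_min_forming stands for t(u_i^m).
    """
    changed = opinion_m != initial_opinion and t_cand >= t_used
    return changed or t_cand >= t_min_forming
```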

5 Experiments

To evaluate the effectiveness of AATD, we conduct experiments in which we evaluate the accuracy R on different network topologies with \(N \in {100...2000}\) and expected degree \(d = 6\). We consider the following widely used network topologies: (1) a small-world network with a fraction \(p_{rewire} = 0.12\) of randomized connections [5]; (2) a scale-free network [1]. New opinions are introduced through a small number of sensors (\(|S| = 0.05N\) with accuracy \(r = 0.55\)) that are randomly distributed across the network. To simulate a gradual introduction of the new opinions, only 10% of the sensors make new observations after the preceding opinion cascade has stopped. Finally, all agents are initialized with the same confidence bound \(\sigma = 0.9\), an initial opinion \(o_i^1 \in {undeter, white, black}\), and individually assigned priors \(P'_i\) drawn from a normal distribution \(N(\mu = 0.5, s = 0.1)\) within the range of the confidence bounds \((1 - \sigma , \sigma )\) when \(o_i^1 = undeter\); when \(o_i^1 \in [white, black]\), \(P'_i\) is biased by \(o_i^1\). Each round stops after 1000 sensor observations; after this number of observations, the opinions of the sensor agents converge to the true state and the sharing process stops. At the end of each round, the agents reset their beliefs and opinions to the initial values. AATD and AAT tune the importance levels over 200 rounds, and the metrics are measured in every round.
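The experimental topologies and initialization can be sketched as follows, e.g., with networkx; the generator choices (Watts-Strogatz for the small-world network, Barabasi-Albert with \(m = d/2\) for the scale-free network) and the truncation-by-resampling of the priors are our assumptions consistent with the description above.

```python
import random
import networkx as nx

def build_network(n: int, topology: str, d: int = 6, p_rewire: float = 0.12):
    """Generate one of the two topologies used in the experiments."""
    if topology == "small-world":
        return nx.watts_strogatz_graph(n, d, p_rewire)  # degree d, rewiring p
    return nx.barabasi_albert_graph(n, d // 2)          # average degree ~ d

def initialize(g, sensor_ratio: float = 0.05, sigma: float = 0.9):
    """Pick sensors at random and draw priors P'_i ~ N(0.5, 0.1),
    resampled until they fall inside the bounds (1 - sigma, sigma)."""
    sensors = random.sample(list(g.nodes), int(sensor_ratio * g.number_of_nodes()))
    priors = {}
    for i in g.nodes:
        p = random.gauss(0.5, 0.1)
        while not (1 - sigma < p < sigma):
            p = random.gauss(0.5, 0.1)
        priors[i] = p
    return sensors, priors
```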

5.1 Selection of Target Awareness Rate

We run AAT with different values of \(h_{trg}\) to determine the optimal value for sharing the correct opinion. The result is shown in Fig. 7. The vertical axis is the accuracy R and the horizontal axis is the target awareness rate \(h_{trg}\). The blue line is R on the scale-free network and the orange line is R on the small-world network with AAT. The result indicates that R is highest at \(h_{trg} = 0.85\) on the small-world network and at \(h_{trg} = 0.9\) on the scale-free network. We use \(h_{trg} = 0.85\) in the following experiments because we focus on the small-world property in small networks.

Fig. 7. The accuracy depending on the target awareness rate \(h_{trg}\) (10 instances of each topology with \(N = 1000\) and \(d = 6\)) (Color figure online)

5.2 Accuracy of Opinions with Network Size

Figure 8 shows the accuracy R of AAT and AATD when all agents have no opinions or initial opinions, with \(h_{trg} = 0.85\). The vertical axis is the accuracy R and the horizontal axis is the network size. The blue, orange, and green lines are the results when all agents start with the white opinions (correct initial state), the black opinions (incorrect initial state), and no opinions (undetermined initial state), respectively. With the undetermined initial state, the accuracy R of AATD is almost the same as that of AAT for all network sizes (the green lines in (a), (b), (c), and (d)), whether small-world or scale-free, and R is around 0.7. For network sizes of 50...100, R is around 0.7 in AATD regardless of the initial opinion state ((b) and (d)), whereas in AAT there is a large difference in R between the determined initial opinion states ((a) and (c)). Therefore, AATD is stable at around 0.7 in small networks, while AAT is unstable. However, when the network size exceeds 200, the accuracy R of AATD with the white initial opinion drops sharply.

Fig. 8. The accuracy depending on the initial opinion (white, black, undeter) and the network model (Color figure online)

6 Discussion

6.1 Effectiveness of AATD in Small Networks

Unlike with AAT, the agents with AATD can share the correct opinion in small networks, especially when the agents have initial opinions. If all agents with AAT have initial opinions, Eq. (9) returns True because of the term \(\big ( o_i^m \ne undeter \wedge t_i^l \ge t_i \big )\). In this situation, the agents with AAT cannot learn the importance levels needed to change their initial opinions. Figure 9 shows all importance levels held by an agent with AAT when all agents' initial opinions are white: the candidate importance levels of the agent in rounds 1 and 200. The importance levels are shown in the blue area and the awareness rates of the importance levels in the orange area; the place surrounded by the red line, marked by the black arrow in the two candidate tables, is the pair of the selected importance level t and its awareness rate. In Fig. 9, the agent always selects \(t = 0.504\), and in round 200 the awareness rates of all importance levels have become 1, because the agents with AAT have held the white opinion since round 1 for the reason above. This shows that the agents with AAT cannot learn the importance levels in this situation. Figures 10 and 11 show the proportions of the opinions formed by the agents with AAT and AATD in each round in a small network (network size 100). The left, center, and right panels of these figures show the proportions when all agents start with no opinions (undetermined initial state), white opinions (correct initial state), and black opinions (incorrect initial state), respectively, in the first round. The vertical axis is the proportion of correct opinions (accuracy R) and the horizontal axis is the round number. The blue, orange, and green areas indicate the proportions of correct opinions (white), incorrect opinions (black), and undetermined agents, respectively; note that the white opinion is the correct one in this setting. These figures show that most agents with AAT share the correct opinions from the beginning of the rounds given the correct initial state, while they cannot share the correct opinions given the incorrect initial state. The agents with AATD, in contrast, can form the correct opinions in both situations. In addition, the agents with AAT and AATD form the correct opinions to the same extent in the undetermined initial state, the situation originally assumed by AAT. From these results, the agents with AAT cannot learn the importance level appropriate to each situation, while the agents with AATD can learn it and share the correct opinions in small networks.

6.2 Ineffectiveness of AATD in Large Networks

In general, the agents with AATD search for the appropriate importance level in ascending order of the candidate importance levels and select the importance level whose awareness rate is nearest to, but below, \(h_{trg} = 0.85\). Since this mechanism lets all agents select importance levels large enough to share opinions, AATD enables the agents to form the correct opinion. However, when the network size is large, the agents far from the sensor agents cannot receive opinions in the early rounds and cannot update their awareness rates until their neighbor agents form opinions. Since the denominator of Eq. (10) is the number of rounds, the estimated awareness rates of these agents keep decreasing in this situation. As a result, they never obtain an awareness rate near \(h_{trg} = 0.85\) and end up always selecting the maximum importance level. That is, the agents become easily influenced by received opinions, and the agent network becomes very vulnerable to wrong observations from some sensor agents. Figure 8 indeed shows that, with the correct initial state (white), the accuracy R of AATD drops sharply as the network size increases.

Fig. 9. Candidates of importance levels of the agent in Round 0 and 200 in AAT

Fig. 10. AAT learning results with different initial settings (network size = 100, \(h_{trg} = 0.85\)) (Color figure online)

Fig. 11. AATD learning results with different initial settings (network size = 100, \(h_{trg} = 0.85\)) (Color figure online)

6.3 Relationship Between Parameters of Network Model and Effectiveness of AAT and AATD

The structure of the small-world network model varies with the rewiring probability \(p_{rewire}\) and the average degree d. We verified the robustness of the accuracy R provided by AAT and AATD against these parameters at a small network size (N = 100) by varying their values. Figure 12 shows the relationship between d and R. The vertical axis is the accuracy R and the horizontal axis is the average degree d. The white, orange, and green lines are the results with the correct, incorrect, and undetermined initial states, respectively. The figure shows that with the incorrect initial state in AAT, R changes at the boundary of \(d = 6\), whereas in AATD, R is stable regardless of the value of d.

Fig. 12. Average degree d and accuracy R (network size = 100, \(h_{trg} = 0.85\)) (Color figure online)

Figure 13 shows the relationship between \(p_{rewire}\) and R in AAT and AATD. The vertical axis is the accuracy R and the horizontal axis is \(p_{rewire}\). The white, orange, and green lines are the results with the correct, incorrect, and undetermined initial states, respectively. The figure shows that R is not affected by \(p_{rewire}\) in either AAT or AATD.

Fig. 13. \(p_{rewire}\) and accuracy R (network size = 100, \(h_{trg} = 0.85\)) (Color figure online)

7 Conclusion

In this paper, we developed AATD, an extension of the Autonomous Adaptive Tuning algorithm for sharing correct opinions in networks in which all agents have already formed opinions. AATD realizes opinion sharing with high accuracy regardless of the initial opinion state, especially when the network size is small. Specifically, AATD modifies the OpinionFormed function to return True when the agents change their opinions from the initial opinions, unlike AAT, which returns True when they determine opinions from the undetermined state. AATD also modifies the function to return True for the importance levels at or above \(t(u_i^m)\), the level that forms the opinion within the observed number of received opinions, instead of directly comparing the required update count with the observed one as in AAT. From the experimental results, we can conclude the following: (1) the agents with AATD can share the correct opinions irrespective of whether all agents start with some initial opinions or no opinions, when the network size is small (50–200); and (2) AATD is more robust than AAT across different complex network topologies. Our future work is to relax the assumption of a small network size so that the method applies to more complex networks, such as real distributed network systems. To tackle this issue, we plan to propose a new algorithm that estimates the awareness rate without being distorted by agents that cannot receive opinions in the initial rounds.