
1 Introduction

The belief revision problem consists of adding new beliefs to an already established knowledge base (KB) while preserving the original beliefs and without compromising its coherence. The best-known paradigm for belief revision is the AGM model, developed in 1985 by Alchourrón, Gärdenfors and Makinson [5]. In this model, the revision operation gives priority to the new information over the beliefs already in the agent's knowledge; however, it is important to know the reliability of the source, as noted by Liberatore [19]. Later, as indicated by Katsuno and Mendelzon [11], the different semantic perspectives on belief revision were unified and the AGM model was reformulated into the KM postulates.

From the point of view of propositional logic, belief revision is framed within the knowledge-based systems approach to reasoning in intelligent systems, which consists of maintaining knowledge in a representation language with well-defined semantics, according to Rúa and Sierra [15].

The knowledge of an agent is closed under logical consequence, denoted \(C_n (K)\); that is, the agent believes all the logical consequences of its beliefs. These sentences are stored in a knowledge base equipped with a mechanism of deductive reasoning.

Propositional implication, as mentioned in [8], is an important task in problems such as estimating the degree of belief and updating beliefs in Artificial Intelligence applications. For example, it is important in the planning and design of multi-agent systems, in logical diagnosis, in approximate reasoning, and in counting the number of solutions of satisfiability instances, as indicated in [2, 18], among other applications.

In general, the propositional inference problem is a hard challenge in automated reasoning and turns out to be Co-NP-complete [24]. Many other forms of reasoning have also been shown to be hard to compute [4].

Fig. 1. Base model for propositional inference.

In this paper, we apply a procedure based on falsifying patterns and depth first search to belief revision. The basic model for determining whether the new information is already present in the knowledge base is shown in Fig. 1. Thus, the objective of this research is to present an algorithm that determines whether the new information should be added to the knowledge base or whether this is unnecessary because it can already be inferred.

2 Related Works

The problem of belief revision has been approached from different perspectives and theories. Some authors consider it important to pre-process the new information. For example, [14] presents a study of some of the computationally hard problems associated with the theory of belief revision, among which is deduction over logical knowledge bases.

It is expected that the complexity of the deduction processes of NP-hard problems can be reduced by pre-processing the input formulas. In addition, pre-processing could help determine which types of instances are unlikely to be received or to be handled subsequently by deduction processes.

In [10], the direct relation between the propositional implication problem over normal forms and the classic Co-NP-complete problem of checking whether a disjunctive formula is a tautology is described. Although propositional inference is Co-NP-complete in its general version, there are also cases that can be solved efficiently.

When new information is available, the respective sets are modified in different ways so as to preserve parts of the knowledge during the revision process. This approach allows dealing with difficult and complex scenarios [16]. It is also important to know whether the new information is reliable, since this could save processing time. In [23] it is proposed that when the new information comes from another agent, we must first determine whether that agent is trustworthy. Trust is defined as a step prior to processing the revision, and it is emphasized that trust in an agent is often restricted to a particular domain.

Liberatore [20] and Peppas [21] propose carrying out belief revision by considering the plausibility of the information. Given a sequence of revisions and their result, a possible initial order is built according to how the sequence of revisions was generated.

It is important to note that the new information must be reliable before revising and updating the knowledge base, regardless of whether it is a base of one belief or of multiple beliefs. As indicated in [22], belief fusion aims to combine pieces of information from different and possibly conflicting sources in order to produce a single consistent belief base that retains as much information as possible.

In our work, we take propositional inference as the basis for modeling the belief revision process. For several authors who focus on belief revision, logic has been the main subject of study [9]. In [13], an abstract theoretical model is presented that characterizes the AGM and KM postulates for different logics, among them Propositional Logic (PL), Horn Logic (HL), First Order Logic (FOL) and Description Logic (DL), in terms of minimum change between interpretations.

In [1] it is emphasized that knowledge representation and reasoning using propositional logic are an important component of artificial intelligence systems. A propositional formula in conjunctive normal form can contain redundant clauses, whose elimination from the formula does not affect its set of models. The identification of redundant clauses is important because redundancy often leads to unnecessary computation and wasted storage, and can obscure the structure of the problem. Experimental results reveal that many CNF instances obtained from practical applications of the propositional satisfiability problem (SAT) exhibit a high degree of redundancy.

Similarly, in [7] a decomposition-based algorithm is proposed for revision problems in classical propositional logic. A set of decomposition rules is presented to analyze the satisfiability of formulas with respect to a set of literals. A decomposition function is constructed to compute all the satisfying literal sets of a given formula, since these conform the models that satisfy the formula.

In [3], a general family of belief operators in a propositional environment is studied. These operators are based on formula/literal dependency, which is more refined than the notion of formula/variable dependency.

Several efforts have been made using mathematical logic to solve the belief revision problem [12, 17]. Our proposal is linked to the work reported in [10], where an operator is constructed using propositional logic. In this new version of the belief revision operator, a depth first search strategy is used in order to improve the complexity of the algorithm proposed in [10]. In addition, our algorithm gives the new information \(\phi \) higher priority than the information stored in the knowledge base, because the new information prevails in dynamic processes.

3 Preliminaries

Let \(X = \{x_1, \ldots , x_n\}\) be a set of n Boolean variables. A literal, denoted lit, is a variable \(x_i\) or a negated variable \(\lnot x_i\). As usual, for each \(x \in X\), \(x^0 = \lnot x\) and \( x^1 = x\). A clause is a disjunction of distinct literals. For \(k \in N\), a k-clause is a clause with exactly k literals, and a \((\le k)\)-clause is a clause with at most k literals. Sometimes we consider a clause as a set of literals.

A phrase is a conjunction of literals, and a k-phrase is a phrase with exactly k literals. A variable \(x \in X\) appears in a clause C (or phrase) if x or \(\lnot x\) is an element of C.

A conjunctive normal form (CNF) is a conjunction of clauses, and a k-CNF is a CNF containing only k-clauses. A disjunctive normal form (DNF) is a disjunction of phrases, and a k-DNF is a DNF containing only k-phrases. A CNF F with n variables represents an n-ary Boolean function F: \(\{0, 1\}^n\) \(\rightarrow \) {0, 1}. Conversely, any Boolean function F has infinitely many equivalent representations, among them some in CNF and some in DNF.

An assignment s for a formula F is a Boolean mapping s : v(F) \(\rightarrow \{1, 0\}\). An assignment s can also be considered as a non-complementary set of literals: \(l \in s\) if and only if s assigns true to l and false to \(\lnot l\). s is a partial assignment for the formula F when s determines a logical value only for the variables of a proper subset of v(F), namely \(s: Y \rightarrow \{1, 0\}\) with \(Y\subset v(F)\). A clause C is satisfied by an assignment s if \(Lit(C) \cap Lit(s) \ne \emptyset \). Otherwise, if for every literal l in C it holds that \(\lnot l \in s\), then C is falsified by s. A CNF F is satisfied by an assignment s if each clause of F is satisfied by s. A model of F is an assignment on v(F) satisfying F. A CNF F is falsified by an assignment s if there exists a clause of F that is falsified by s. Mod(F) and Fals(F) denote the set of models and the set of falsifying assignments of the formula F, respectively.
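The satisfaction and falsification conditions above can be checked directly when clauses and assignments are represented as sets of literals. The following minimal sketch uses hypothetical helper names and encodes a literal as a signed variable index (e.g. -2 for \(\lnot x_2\)):

```python
# Hypothetical helpers: a clause and an assignment are both sets of signed
# variable indices (e.g. -2 stands for the literal ¬x2).

def satisfies(s, clause):
    # A clause (a disjunction) is satisfied when it shares a literal with s.
    return bool(s & clause)

def falsifies(s, clause):
    # A clause is falsified when s contains the complement of every literal.
    return all(-lit in s for lit in clause)

clause = {1, -2}          # (x1 ∨ ¬x2)
s = {-1, 2, 3}            # x1 = 0, x2 = 1, x3 = 1
print(satisfies(s, clause))  # False
print(falsifies(s, clause))  # True
```

Note that for a total assignment the two conditions are complementary: a clause is falsified exactly when it is not satisfied.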

Two independent clauses \(C_i\) and \(C_j\) contain a complementary pair of literals; therefore their falsifying assignments must also assign complementary values, that is, \(Fals(C_i) \cap Fals(C_j) = \emptyset \).

Given two falsifying strings A and B, each of length n, if there is an \(i\in [1,n]\) such that \(A[i]=x\) and \(B[i]=1-x\) with \(x\in \{0,1\}\), we say that they have the independence property. Otherwise, we say that both strings are dependent.
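Over falsifying strings, the independence property reduces to finding a position where the two patterns carry complementary fixed values; a * position never produces a conflict. A minimal sketch (the function name is ours):

```python
def independent(a, b):
    # Patterns are independent iff some position holds 0 in one pattern and
    # 1 in the other; '*' positions match anything.
    return any(x != '*' and y != '*' and x != y for x, y in zip(a, b))

# Patterns of (¬w) and (p ∨ q ∨ r ∨ w) over the variable space pqrstuvw:
print(independent('*******1', '000****0'))  # True: complementary values at w
print(independent('*******1', '*111***1'))  # False: the strings are dependent
```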

Given a pair of dependent clauses \(C_1\) and \(C_2\), if \(Lit(C_1)\subseteq Lit(C_2)\) we say that \(C_2\) is subsumed by \(C_1\).
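In terms of falsifying patterns, \(Lit(C_1)\subseteq Lit(C_2)\) means that every fixed position of \(C_1\)'s pattern is fixed to the same value in \(C_2\)'s pattern, so \(Fals(C_2) \subseteq Fals(C_1)\). A sketch under that reading (helper name is ours):

```python
def subsumes(a, b):
    # The clause with pattern a subsumes the clause with pattern b when
    # every fixed position of a appears with the same value in b.
    return all(x == '*' or x == y for x, y in zip(a, b))

# (r ∨ s ∨ ¬u ∨ ¬v) subsumes (r ∨ s ∨ ¬u ∨ ¬v ∨ w) over pqrstuvw:
print(subsumes('**00*11*', '**00*110'))  # True
print(subsumes('**00*110', '**00*11*'))  # False: the longer clause cannot subsume
```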

Propositional logic, as indicated in [6], is used to analyze formally valid reasoning, starting from propositions and logical operators in order to construct formulas over the propositional variables.

4 Inference Algorithm with Falsifying Patterns

In the search to reduce the computational resources required by the belief revision process, we present a new algorithm based on depth first search. The algorithm reported in [10] is based on a table of size |K| x \(| \phi |\) that is adjusted dynamically depending on the number of new clauses to be generated; the table is then scanned row by row. The new algorithm instead traverses the nodes of a tree in depth, but with a bounded depth value |k|; this avoids the risk of descending indefinitely into a non-finite branch, as well as the need to store all the expanded nodes.

Let K be the knowledge base and let \(\phi \) be the new information. We say that K semantically implies \(\phi \), written \(K \models \phi \), if \(\phi \) is satisfied by each model (Mod) of K, i.e., if \( Mod(K) \subseteq Mod(\phi ).\) Belief revision seen as propositional inference relies on the following facts:

  • \(K \models \phi \) iff Mod(K) \(\subseteq \) Mod (\(\phi \))

  • \(K \models \phi \) iff Fals(\(\phi \)) \(\subseteq \) Fals (K)

Thus, Fals() represents the inverted truth values for each literal of each clause of K and \(\phi \). Given a clause \(C_i=(x_{i_1} \vee ... \vee x_{i_k})\), the value at each position from \(i_1\)-th to \(i_k\)-th of the string \(A_i\) is fixed to the truth value falsifying the corresponding literal of \(C_i\). E.g., if \(x_{i_j} \in C_i \), the \(i_j\)-th element of \(A_i\) is set to 0. On the other hand, if \(\lnot x_{i_j} \in C_i\), then the \(i_j\)-th element is set to 1. The variables which do not appear in \(C_i\) are represented by the symbol *, meaning that they can take any logical value in the set \(\{0, 1\}\). We call the string \(A_i\) representing the set of falsifying assignments of a clause \(C_i\) a falsifying pattern.
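The construction of a falsifying pattern can be sketched as follows (hypothetical helper; clauses are given as lists of literal strings, with ~ marking negation). The two calls mirror the transformation of Fig. 2:

```python
def falsifying_pattern(clause, variables):
    # A positive literal is falsified by 0, a negated one by 1; variables
    # absent from the clause remain '*' (they may take any value).
    pattern = ['*'] * len(variables)
    for lit in clause:
        neg = lit.startswith('~')
        pattern[variables.index(lit.lstrip('~'))] = '1' if neg else '0'
    return ''.join(pattern)

VARS = list('pqrst')
print(falsifying_pattern(['~p', 'q', '~s'], VARS))      # (¬p ∨ q ∨ ¬s)     → 10*1*
print(falsifying_pattern(['q', 'r', '~s', 't'], VARS))  # (q ∨ r ∨ ¬s ∨ t)  → *0010
```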

Fig. 2. Transforming by falsifying patterns.

Let a KB \(K = \bigwedge _{(j=1)}^{m}{C_j}\) and \(\phi = \bigwedge _{(i=1)}^{k}{\phi _i}\), where each \(C_j \in K\) and each \(\phi _i \in \phi \) are clauses expressed over the same set of n Boolean variables. The clauses \((\lnot p \vee q \vee \lnot s) \) and \( ( q \vee r \vee \lnot s \vee t) \) are transformed using falsifying patterns as shown in Fig. 2, considering the variable space pqrst.

The main procedure for the belief revision process between the new information \(\phi \) and the knowledge base K returns a set S of clauses deduced from \(\phi \). This procedure contemplates the following subprocesses:

  • \(Prep(F, K, S)\): This function performs a pre-processing of the knowledge base K with respect to each of the clauses of the new information \(\phi \), in order to eliminate subsumed and independent clauses.

  • \(sort(K, F)\): sorts K based on the differences with \(F_i\), using the function \(Dif(F, C)\), which computes the difference \({\mid } Lit(C_j) - Lit(F_i) {\mid }\) between the literals of a clause of the knowledge base K and a clause of \(\phi \).

  • \(DFS(\phi , K, S)\): This function (Algorithm 1) allows the generation of the depth first search tree for each clause in \(\phi \).
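The pre-ordering step can be sketched as follows, with the clauses represented as sets of literal strings (names and representation are ours, using clauses from Example 1 below):

```python
def dif(phi_clause, c_clause):
    # Dif(φi, Cj): the literals of Cj that do not appear in φi.
    return set(c_clause) - set(phi_clause)

phi2 = {'~w'}                                # the clause (¬w)
K = {'C3': {'~q', '~s', '~t', 'u', '~w'},    # (¬q ∨ ¬s ∨ ¬t ∨ u ∨ ¬w)
     'C4': {'~q', '~r', '~s', '~w'},         # (¬q ∨ ¬r ∨ ¬s ∨ ¬w)
     'C5': {'~q', 'r', 't', 'u', '~w'}}      # (¬q ∨ r ∨ t ∨ u ∨ ¬w)

# Sort the base by increasing size of the difference set.
order = sorted(K, key=lambda name: len(dif(phi2, K[name])))
print(order)  # C4 comes first: its difference has 3 literals, the others 4
```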

Fig. 3. Operations between clauses considered in the development of the algorithm.

Figure 3 describes the main operations applied in the general procedure as well as in Algorithm 1. The set S of clauses generated by the proposed algorithm contains exactly the clauses needed to infer each \(\phi _i \in \phi \). Applying the depth first search strategy for all \(C_j \in K \), we obtain that \( K \models \phi \); in this way the algorithm covers the space \( Fals(\phi _i) - Fals(C_j)\), which is the minimum space of assignments needed for \(Fals(\phi ) \subseteq Fals(K)\) [10].
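One standard way to cover \(Fals(\phi _i) - Fals(C_j)\) with patterns branches on each position where \(C_j\)'s pattern is fixed but \(\phi _i\)'s is free. We sketch it here as a plausible reading of the generation step, not necessarily the authors' exact implementation:

```python
def gen(a, b):
    # For dependent patterns a (φi) and b (Cj): at every position where b
    # is fixed and a is free, emit a copy of a that agrees with b on the
    # earlier such positions and takes the complementary value at the
    # current one. The emitted patterns partition Fals(a) - Fals(b).
    out, prefix = [], list(a)
    for i, (x, y) in enumerate(zip(a, b)):
        if x == '*' and y != '*':
            branch = prefix.copy()
            branch[i] = '1' if y == '0' else '0'
            out.append(''.join(branch))
            prefix[i] = y  # agree with b from this position onward
    return out

# (¬w) against (¬q ∨ ¬r ∨ ¬s ∨ ¬w) over pqrstuvw:
print(gen('*******1', '*111***1'))
# ['*0*****1', '*10****1', '*110***1']
```

Together with b itself, the emitted patterns partition a, so their falsifying assignments cover exactly \(Fals(a) - Fals(b)\).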

Algorithm 1. (listing omitted)

5 Results

The following is the result of applying the general procedure with the depth first search described in Algorithm 1 using falsifying patterns.

Example 1. Let K = {\((r \vee s \vee \lnot u\vee \lnot v) \wedge ( r \vee \lnot s \vee \lnot u \vee v \vee \lnot w) \wedge ( \lnot q \vee \lnot s \vee \lnot t \vee u \vee \lnot w) \wedge ( \lnot q \vee \lnot r \vee \lnot s \vee \lnot w) \wedge ( \lnot q \vee r \vee t \vee u \vee \lnot w) \wedge ( p \vee q \vee r \vee w)\)} and \(\phi = \{ ( r \vee s \vee \lnot u \vee \lnot v \vee w) \wedge (\lnot w) \wedge (\lnot q \vee \lnot s \vee \lnot v \vee \lnot w) \}\).

Expressing the above formulas via falsifying patterns: \(K=\{C_1, C_2, C_3, C_4, C_5, C_6\} = \{(**00*11*), (**01*101), (*1*110*1), (*111***1), (*10*00*1), (000****0) \}\) and \(\phi =\{ (**00*110), (*******1), (*1*1**11)\}\). In Fig. 4, the complete processing tree is shown.
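These patterns can be re-derived mechanically; the small check below (helper name and clause encoding are ours) rebuilds three of them over the variable space pqrstuvw:

```python
def pattern(clause, variables='pqrstuvw'):
    # Falsify each literal: positive → 0, negated → 1; absent → '*'.
    out = ['*'] * len(variables)
    for lit in clause.split():
        neg = lit.startswith('~')
        out[variables.index(lit.lstrip('~'))] = '1' if neg else '0'
    return ''.join(out)

print(pattern('r s ~u ~v'))   # C1 = (r ∨ s ∨ ¬u ∨ ¬v) → **00*11*
print(pattern('~w'))          # (¬w)                   → *******1
print(pattern('p q r w'))     # C6 = (p ∨ q ∨ r ∨ w)   → 000****0
```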

Table 1. Processing results of the algorithms

Table 1 shows some results of executing the algorithm reported in [10] against the new algorithm proposed in this paper. In each case, the new algorithm performs less processing to determine the inference, which is due to the use of the depth first search strategy.

Fig. 4. Generating the depth first search tree by \(DFS\_BR()\).

  • The main process starts by applying the procedure Prep(). In this step, it is determined that \(\phi _1\) is a clause subsumed by \(C_1\), because their difference is zero.

  • The process restarts considering the clause \(\phi _2\). The pre-processing determines that \(C_6\) is deleted because it is independent of \(\phi _2\).

  • The knowledge base K is sorted based on its difference (Dif()) with \(\phi _2\), obtaining the base \(K =\{C_4, C_1, C_2, C_3, C_5 \}\).

  • For \(\phi _3\), the reduced knowledge base K is sorted by its difference with \(\phi _3\), resulting in \(K=\{C_4, C_3, C_5\}\).

  • The recursive process (Algorithm 1) starts by generating the center branch in Fig. 4; each edge is labeled with the clause that is applied. The dashed line shows which clause was subsumed.

  • The procedure continues with the clause \(\phi _3\), generating the branch shown on the right of the tree. In this case, the clauses \(C_1, C_2\) and \(C_6\) were deleted from K because they are independent of \(\phi _3\).

  • The processing finishes when all clauses of \(\phi \) have been evaluated, and the tree's leaves represent the final clauses that form S. For this example, we have \((K \wedge S)=\{(*01****1), (*001*0*1), (*001*111), (*000*0*1), (*000*101), (*110***1), (*101*111), (*100*101), (*10010*1), (*1010111) \}\).

The time function for the algorithm that determines \(K \models \phi \) is denoted by \(T_0(|\phi |, |K|)\). This process depends mainly on the runtime of the procedure \(DFS(\phi , K, S)\), which is obtained from a recursive computation of \(Gen(\varphi _i, C, n)\), with \(\varphi _i \in \phi \), \(C \in K\) and \( n=|Dif(\varphi _i,C)|\).

The time complexity of the \(DFS(\varphi _i, K, S)\) process is of order \(O(|K| \cdot n \cdot f(|\varphi _i|,|K|))\), where \(f(|\varphi _i|,|K|)\) is an integer function that, given a clause \(\varphi _i\) and a CNF K, counts the number of clauses to be formed by \(Gen(\varphi _i, C, n)\).

Let us analyze the maximum number of clauses that can be generated by \(Gen(\varphi _i, C, n)\). In some cases, \(Gen(\varphi _i, C, n)\) generates the empty set (when \(n=0\), because \(C \models \varphi _i\)). However, in the worst case, the time complexity of computing \(Gen(\varphi _i, C, n)\) depends on the length of the sets \(Dif(\varphi _i,C_j) = S_{ij} = \{x_1, x_2, \ldots , x_p\} = Lit(C_j) - Lit(\varphi _i)\).

As noted previously, given \(\varphi _i \in \phi \), it is relevant to sort the clauses \(C_j \in K\) according to the cardinality of the sets \(Dif(\varphi _i,C_j)\) from lowest to highest, eliminating the clauses that are independent of \(\varphi _i\). When there are no clauses independent of \(\varphi _i\), the time complexity of computing \(DFS(\varphi _i, K, S)\) is bounded by the number of resulting clauses. In other words, \({\mid } DFS(\varphi _i, K, S) {\mid } \ \le \ {\mid } S_{i1} {\mid }\,*\,{\mid } S_{i2} {\mid }\,*\,\ldots \,*\,{\mid } S_{is} {\mid }\,*\,Poly(n,m)\), with \(s \le m\), where Poly(n, m) accounts for the polynomial time due to the string matching process and the sorting of the clauses in K.
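For instance, taking the clause \((\lnot q \vee \lnot s \vee \lnot v \vee \lnot w)\) of Example 1 against the reduced base \(\{C_4, C_3, C_5\}\), the difference sets have sizes 1, 2 and 3, so at most 1 * 2 * 3 = 6 clauses are generated (up to the polynomial factor). A quick check (the set encoding is ours):

```python
# Sizes of the difference sets Dif(φ, Cj) = Lit(Cj) - Lit(φ) for the
# clause (¬q ∨ ¬s ∨ ¬v ∨ ¬w) of Example 1.
phi = {'~q', '~s', '~v', '~w'}
base = [{'~q', '~r', '~s', '~w'},       # C4: difference {¬r}
        {'~q', '~s', '~t', 'u', '~w'},  # C3: difference {¬t, u}
        {'~q', 'r', 't', 'u', '~w'}]    # C5: difference {r, t, u}

sizes = [len(c - phi) for c in base]
bound = 1
for k in sizes:
    bound *= k  # product of the difference-set sizes
print(sizes, bound)  # [1, 2, 3] 6
```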

Then, we infer an upper bound for the function \(f(|\varphi _i|,|K|)\), given by

$$\begin{aligned} Max\{{\mid } S_{i1} {\mid }\,*\,{\mid } S_{i2} {\mid }\,*\,\ldots \,*\,{\mid } S_{is} {\mid } : \forall \varphi _i\in \phi \} \end{aligned}$$
(1)

Furthermore, \(Dif(\varphi _i,C)\) fixes logical values for a set of variables that do not change their values during the process \(DFS(\varphi _i, K, S)\); hence \({\mid } S_{i1} {\mid } + {\mid } S_{i2} {\mid } + \ldots + {\mid } S_{is}{\mid } \le n - {\mid } \varphi _i {\mid }\).

6 Conclusions

Belief revision models a very general aspect of human reasoning: how we store our knowledge while incorporating new information. One of the mechanisms we apply in a rational way is inference, i.e., if the new information can be inferred from prior knowledge, then it is no longer necessary to store it.

We propose a method that works on the set of falsifying assignments of the formulas involved, in order to check whether \(K \models \phi \).

We present a method for belief revision using conjunctive normal forms. Since K and \(\phi \) are CNFs, the belief revision process between K and \(\phi \) (whether K infers \(\phi \)) is reduced to performing the revision between each \(\phi _i \in \phi \) and each \(C_j \in K\).

A logical algorithm based on depth first search is proposed in order to obtain a set S of clauses whose falsifying assignments cover the space \(Fals(S) = Fals(\phi _i) - Fals(C_j)\); in this way, we can determine whether \(K \models \phi \).

In general, we assume that the knowledge base K, represented in CNF, is satisfiable, that is, there is a set of models that satisfy K. When new information \(\phi \) is added to carry out the belief revision process, we are in fact eliminating models. Thus, K and \(\phi \) could become an unsatisfiable knowledge base. The revision of the consistency of dynamic knowledge bases in terms of falsifying patterns is considered as future work.