Storage and recall capabilities of fuzzy morphological associative memories with adjunction-based learning
Introduction
(Neural) associative memories (AMs) belong to a class of artificial neural networks that are able to deduce or retrieve memorized information from possibly incomplete and corrupted data (Hassoun, 1993, Kohonen, 1989). This feature makes AMs suitable for a wide variety of applications such as discrete and combinatorial optimization (Hopfield & Tank, 1985), classification (Sussner and Valle, 2006a, Sussner and Valle, 2007, Zhang et al., 2005), biometric technologies (Zhang et al., 2004, Zhang and Zuo, 2007), image processing (Graña et al., 2009, Ritter and Urcid, in press, Valle, 2009, Valle, in press), and prediction (Marcantonio et al., 1996, Sussner et al., 2009, Sussner and Valle, 2007).
Desirable characteristics of an AM include an excellent error correction capability, i.e., tolerance with respect to noisy or incomplete input patterns, a large absolute storage capacity, a small number of spurious memories, and, in the case of dynamic AM models, fast convergence to the desired fundamental memory (Hassoun, 1993, Pao, 1989). Ever since the inception of the static linear associative memory (Hassoun, 1993, Kohonen, 1989, Pao, 1989) and the dynamic Hopfield net (Hopfield, 1982), the properties of AM models have been extensively studied by a host of researchers (Hassoun, 1993, McEliece et al., 1987, Michel and Farrell, 1990, Personnaz et al., 1985).
One of the most interesting models that has appeared in recent years is the morphological associative memory (MAM) (Ritter and Sussner, 1996a, Ritter and Sussner, 1996b, Ritter et al., 1998, Sussner and Valle, 2006a). In the auto-associative case, this model exhibits optimal absolute storage capacity and one-step convergence (Ritter et al., 1998, Sussner and Valle, 2006a). The functionality of the auto-associative morphological memory can be easily understood in terms of its fixed points (Ritter and Gader, 2006, Sussner, 2000, Sussner and Valle, 2006a).
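The optimal storage capacity and one-step recall of the auto-associative MAM are easy to demonstrate numerically. The following sketch (a minimal NumPy illustration under our own naming, not the authors' code) stores an arbitrary set of patterns as the columns of a matrix `X`, computes the memory by the min-of-differences rule w_ij = min over ξ of (x_i^ξ − x_j^ξ), and recalls with the max-plus product:

```python
import numpy as np

def max_plus(W, x):
    # (W ⊞ x)_i = max_j (w_ij + x_j): the max-plus matrix-vector product
    return np.max(W + x[np.newaxis, :], axis=1)

def mam_weights(X):
    # w_ij = min_ξ (x_i^ξ - x_j^ξ): auto-associative morphological memory
    # X has shape (n, k): k fundamental memories of length n
    diffs = X[:, np.newaxis, :] - X[np.newaxis, :, :]  # shape (n, n, k)
    return diffs.min(axis=2)

rng = np.random.default_rng(0)
X = rng.random((16, 5))          # five real-valued fundamental memories
W = mam_weights(X)
recalled = np.column_stack([max_plus(W, X[:, xi]) for xi in range(5)])
print(np.allclose(recalled, X))  # True: every stored pattern is recalled exactly
```

Perfect recall holds because the diagonal entries of the memory are zero, so the term j = i already attains x_i^ξ, while every other term is bounded above by it.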
The original MAM model can be viewed as a particular case of a broad class of fuzzy associative memory (FAM) models that have been named fuzzy morphological associative memories (FMAMs) (Sussner and Valle, 2006c, Sussner and Valle, 2007, Valle and Sussner, 2007, Valle and Sussner, 2008). FAMs were developed independently from neural AMs as a tool for implementing fuzzy rule-based systems (Kosko, 1992, Pedrycz and Gomide, 2007). The first models, introduced by Kosko, were only able to store a single association of patterns (Kosko, 1992). Models that are capable of storing multiple associations or rules include the FAM model of Junbo, Fan, and Yan (1994), the generalized FAMs of Chung and Lee (1996), the max–min FAM with threshold of Liu (1999), the fuzzy logical bidirectional associative memories (FLBAMs) of Bělohlávek (2000), and the implicative fuzzy associative memories (IFAMs) (Sussner and Valle, 2006b, Valle et al., 2004).
The mathematical background for MAMs and FMAMs can be found in (fuzzy) mathematical morphology (MM) (Deng and Heijmans, 2002, Maragos, 2005, Nachtegael and Kerre, 2001, Nachtegael et al., 2006, Sussner and Valle, 2008). A general framework for FMAMs has been presented recently (Valle & Sussner, 2008). In most cases, FMAMs either perform a maximum of conjunctions or a minimum of disjunctions. Therefore, we speak of max-FMAMs and min-FMAMs. The two types of FMAM models are related via a relationship of duality. Moreover, we have shown that the class of FMAMs encompasses many well-known FAM models (Valle and Sussner, 2007, Valle and Sussner, 2008). We also presented a learning rule for FMAMs that we have named fuzzy learning by adjunction (FLA) since it emanated from the duality concept of adjunction that plays an important role in MM (Heijmans, 1994, Valle and Sussner, 2008).
This paper investigates the properties of FMAMs with FLA. In particular, we provide an exact characterization of the output of an FMAM in terms of the input and the fundamental memories. Then, we focus on auto-associative fuzzy morphological memories (AFMMs). We prove that one can store and perfectly recall an arbitrary number of patterns in a max-AFMM if the underlying fuzzy conjunction has a left identity. If the fuzzy conjunction is also associative, then the corresponding dynamic max-AFMM model converges to a fixed point in a single iteration.
The paper is organized as follows. Section 2 provides some mathematical background. Section 3 briefly reviews the basic concepts of FMAMs. In Section 4, we present general aspects of FLA including the main theorems concerning the recall phase of FMAMs and the fixed points and storage capacities of AFMMs with FLA. In Section 5, we apply these results to some particular subclasses of AFMMs. Section 6 provides some experimental results concerning the reconstruction of noisy images and the prediction of the monthly streamflow for a hydroelectric plant in southern Brazil. We finish the paper with some concluding remarks and suggestions for further research. The Appendix contains the proofs of the theorems and lemmas.
A brief review of mathematical morphology
Mathematical morphology (MM) is a theory that is concerned with the processing and analysis of objects using operators and functions based on topological and geometrical concepts (Heijmans, 1994, Serra, 1982, Serra, 1988, Soille, 1999). During the last few decades, it has acquired a special status within the field of image processing, pattern recognition, and computer vision. Applications of MM include image segmentation and reconstruction (Kim, 2005), feature detection (Sobania & Evans, 2005),
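The elementary operators of MM are the dilation and the erosion, which compute, respectively, a local maximum and a local minimum of the signal over a structuring element. The 1-D gray-scale sketch below is only an illustration of these two lattice operators (the function names and the flat structuring element are our own choices, not taken from the paper):

```python
import numpy as np

def dilate(f, se):
    # gray-scale dilation by a flat structuring element:
    # (f ⊕ s)(x) = max of f over the window selected by the mask se
    n, r = len(f), len(se) // 2
    padded = np.pad(f, r, constant_values=f.min())
    return np.array([padded[i:i + len(se)][se].max() for i in range(n)])

def erode(f, se):
    # gray-scale erosion: (f ⊖ s)(x) = min of f over the window
    n, r = len(f), len(se) // 2
    padded = np.pad(f, r, constant_values=f.max())
    return np.array([padded[i:i + len(se)][se].min() for i in range(n)])

f = np.array([0.1, 0.9, 0.2, 0.6, 0.4])
se = np.array([True, True, True])   # flat 3-point structuring element
print(dilate(f, se))                # local maxima of f
print(erode(f, se))                 # local minima of f
```

Dilation and erosion form an adjunction, which is precisely the algebraic relationship that the learning rule studied in this paper carries over to the fuzzy setting.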
Basic concepts on associative memories
Associative memories (AMs) are geared to storing a finite set of pattern associations called the set of fundamental memories (Hassoun, 1993, Kohonen, 1989, Pao, 1989). Furthermore, an AM should allow for the retrieval of a desired output upon presentation of a possibly noisy or incomplete version of an input pattern. Mathematically speaking, the AM design problem can be stated as follows: Given a finite set of associations, determine a mapping such that
A brief review on fuzzy learning by adjunction
Suppose that we want to store a set of associations in a max-FMAM given by Eq. (22). For simplicity, let X and Y denote the matrices whose columns are, respectively, the input and output vectors of the fundamental memories. Moreover, let us define an operator as follows: Note that if there exists a synaptic weight matrix W that maps every stored input exactly to the corresponding output, then the max-FMAM produces the desired output upon presentation of the undistorted input, i.e., the FMAM
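As a concrete instance of FLA, take the max-FMAM whose fuzzy conjunction is the minimum; the adjoint operation is then the Gödel (residual) implication, and the learned weight w_ij is the minimum, over all associations, of the implication applied to the pair (x_j^ξ, y_i^ξ). The sketch below (our own NumPy illustration, not the authors' code) verifies that the resulting auto-associative memory recalls every stored pattern perfectly, in line with the left-identity condition discussed in this paper:

```python
import numpy as np

def godel_impl(a, b):
    # Gödel implication, the residuum of the minimum: I(a, b) = 1 if a <= b else b
    return np.where(a <= b, 1.0, b)

def fla_weights(X, Y):
    # fuzzy learning by adjunction for the max-min FMAM:
    # w_ij = min_ξ I(x_j^ξ, y_i^ξ)
    W = np.ones((Y.shape[0], X.shape[0]))
    for xi in range(X.shape[1]):
        W = np.minimum(W, godel_impl(X[:, xi][np.newaxis, :],
                                     Y[:, xi][:, np.newaxis]))
    return W

def max_min(W, x):
    # max-C product with C = minimum: y_i = max_j min(w_ij, x_j)
    return np.max(np.minimum(W, x[np.newaxis, :]), axis=1)

rng = np.random.default_rng(1)
X = rng.random((8, 4))           # four fuzzy patterns with entries in [0, 1]
W = fla_weights(X, X)            # auto-associative storage
out = np.column_stack([max_min(W, X[:, xi]) for xi in range(4)])
print(np.allclose(out, X))       # True: the minimum has left identity 1
```

The diagonal weights equal 1 (the left identity of the minimum), which forces the output to be at least the input pattern, while the adjunction inequality min(I(a, b), a) <= b bounds it from above.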
Examples of max-AFMMs with fuzzy learning by adjunction
This section provides examples that clarify some results of the previous section. We begin by introducing the class of max-AFMMs based on conjunctive uninorms and the class of max-AFMMs based on weak triangular norms (Yager, 1997, Yager and Rybalov, 1996). The former yields max-AFMM models that satisfy Theorem 11, Theorem 14 and Corollary 15, Corollary 16. In contrast, weak triangular norms are, in general, neither associative nor equipped with a left identity. We conclude the section with two
Illustrations of AFMM properties using simulations in gray-scale image recognition
Consider the images of size 64×64 shown in the top row of Fig. 2. These seven images represent downsized versions of images that are contained in the database of the Computer Vision Group of the University of Granada, Spain. For each of these images, we generated a column vector of length 4096 with entries in [0, 1]. These vectors were stored using the four max-AFMMs with FLA discussed in the previous section. In the following
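The storage-and-recall pipeline of this experiment can be sketched as follows. The code below is an illustration only: random 32×32 arrays stand in for the 64×64 Granada images (so the vectors have length 1024 rather than 4096), and of the four memories we show just the Gödel (max-min) AFMM. It demonstrates the behavior under erosive (darkening) noise: the recalled pattern is sandwiched between the corrupted input and the original.

```python
import numpy as np

# Stand-ins for the seven gray-scale images; the experiment in the paper
# uses 64x64 images from the Granada database (vectors of length 4096).
rng = np.random.default_rng(3)
images = rng.random((7, 32, 32))
X = images.reshape(7, -1).T                      # columns: patterns in [0, 1]

# Gödel (max-min) AFMM with fuzzy learning by adjunction:
# w_ij = min_ξ I(x_j^ξ, x_i^ξ), with I the Gödel implication
n, k = X.shape
W = np.ones((n, n))
for xi in range(k):
    col = X[:, xi]
    W = np.minimum(W, np.where(col[None, :] <= col[:, None], 1.0, col[:, None]))

def recall(x):
    # max-min product: y_i = max_j min(w_ij, x_j)
    return np.max(np.minimum(W, x[None, :]), axis=1)

noisy = X[:, 0] * 0.7                            # erosive (darkening) noise
y = recall(noisy)
# the output lies between the eroded input and the original pattern
print(np.all(noisy <= y) and np.all(y <= X[:, 0]))
```

The lower bound follows from the unit diagonal of the weight matrix, the upper bound from the adjunction inequality; how much of the gap is actually closed depends on the stored patterns.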
Concluding remarks
In this paper, we gave an account of the properties of FMAMs with FLA. Special attention was given to the class of max-FMAMs. Recall that similar results can be deduced for the class of min-FMAMs using the duality relationship with respect to a fuzzy negation (cf. Proposition 4 and Eqs. (9), (20)).
We began by showing that the output patterns of max-FMAMs with FLA represent lattice polynomials in transformed versions of the original patterns (cf. Theorem 8). This theorem extends the
Acknowledgements
This work was supported in part by FAPESP under grant no. 2006/06818-1, CNPq under grant nos. 306040/2006-9 and 309608/2009-0 as well as Fundação Araucária under grant no. 14-1-15.197.
References (77)
- Banon and Barrera, Decomposition of mappings between complete lattices by mathematical morphology, part 1. General lattices. Signal Processing (1993).
- Bělohlávek, Fuzzy logical bidirectional associative memory. Information Sciences (2000).
- Bloch and Maître, Fuzzy mathematical morphologies: a comparative study. Pattern Recognition (1995).
- Graña et al., Two lattice computing approaches for the unsupervised segmentation of hyperspectral images. Neurocomputing (2009).
- Liu, The fuzzy associative memory of max–min fuzzy neural networks with threshold. Fuzzy Sets and Systems (1999).
- Nachtegael and Kerre, Connections between binary, gray-scale and fuzzy mathematical morphologies. Fuzzy Sets and Systems (2001).
- Fuzzy neural networks and neurocomputations. Fuzzy Sets and Systems (1993).
- Ronse, Why mathematical morphology needs complete lattices. Signal Processing (1990).
- Morphological corner detector using paired triangular structuring elements. Pattern Recognition (2005).
- Valle and Sussner, A general framework for fuzzy morphological associative memories. Fuzzy Sets and Systems (2008).
- On a class of weak triangular norm operators. Information Sciences.
- Yager, Uninorms in fuzzy systems modeling. Fuzzy Sets and Systems.
- Yager and Rybalov, Uninorm aggregation operators. Fuzzy Sets and Systems (1996).
- Zimmermann and Zysno, Latent connectives in human decision making. Fuzzy Sets and Systems.
- Bezdek, Pattern recognition with fuzzy objective function algorithms.
- Birkhoff, Lattice theory.
- Box and Jenkins, Time series analysis: forecasting and control.
- Supremal multiscale signal analysis. SIAM Journal on Mathematical Analysis.
- Chung and Lee, On fuzzy associative memory with multiple-rule storage capacity. IEEE Transactions on Fuzzy Systems (1996).
- De Baets, Fuzzy morphology: a logical approach.
- De Baets and Fodor, Residual operators of uninorms. Soft Computing.
- Deng and Heijmans, Grey-scale morphology based on fuzzy logic. Journal of Mathematical Imaging and Vision (2002).
- Learning algorithms for a class of neurofuzzy network and application. IEEE Transactions on Systems, Man and Cybernetics, Part C.
- Fodor, Yager, and Rybalov, Structure of uninorms. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems.
- Fuzzy logics based on [0, 1)-continuous uninorms. Archive for Mathematical Logic.
- Heijmans, Morphological image operators (1994).
- Hopfield, Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences (1982).
- Hopfield and Tank, Neural computation of decisions in optimization problems. Biological Cybernetics (1985).
- Jang, ANFIS: adaptive-network-based fuzzy inference system. IEEE Transactions on Systems, Man, and Cybernetics (1993).
- Weakly associative functions on [0, 1] as logical connectives.
- Kim, Segmenting a low-depth-of-field image using morphological filters and region merging. IEEE Transactions on Image Processing (2005).
- Kohonen, Self-organization and associative memory (1989).
- Kong and Kosko, Adaptive fuzzy systems for backing up a truck-and-trailer. IEEE Transactions on Neural Networks.
- Kosko, Bidirectional associative memories. IEEE Transactions on Systems, Man, and Cybernetics.
- Kosko, Neural networks and fuzzy systems: a dynamical systems approach to machine intelligence (1992).