Information Fusion

Volume 77, January 2022, Pages 118-132

A Linguistic Information Granulation Model and Its Penalty Function-Based Co-Evolutionary PSO Solution Approach for Supporting GDM with Distributed Linguistic Preference Relations

https://doi.org/10.1016/j.inffus.2021.07.017

Highlights

  • Propose a linguistic information granulation model.

  • Develop a penalty function-based co-evolutionary PSO (PFCPSO) solution approach.

  • Present the whole algorithm framework for the PFCPSO solution approach.

  • Discuss how the granulation model and the PFCPSO solution approach work in practice.

  • Compare PFCPSO and co-evolutionary PSO in terms of the performance of the solution.

Abstract

This study focuses on the operational realization of linguistic information through information granulation in group decision-making (GDM) scenarios where the preference information offered by decision-makers over alternatives is described using distributed linguistic preference relations (DLPRs). First, an information granulation model is proposed to arrive at the operational realization of linguistic information in GDM with DLPRs. The information granulation is formulated as an optimization problem in which a combination of the consistency degree of individual DLPRs and the consensus degree among individuals is regarded as the underlying performance index. Then, considering that the proposed model is a constrained optimization problem (COP) with an adjustable parameter, which is difficult to solve effectively using general optimization methods, we develop a novel approach for achieving the optimal solution, referred to as penalty function-based co-evolutionary particle swarm optimization (PFCPSO). Within the PFCPSO setting, the designed penalty function is used to transform COPs into unconstrained ones. In addition, the penalty factors and the adjustable parameter, as well as the decision variables of the optimization problems, are simultaneously optimized through the co-evolutionary mechanism of two populations in co-evolutionary particle swarm optimization (CPSO). Finally, a comprehensive evaluation problem concerning car brands is studied using the proposed model and the newly developed PFCPSO approach, which demonstrates their applicability. Two comparative studies are also conducted to show the effectiveness of the proposals. Overall, this study exhibits two facets of originality: the presentation of the linguistic information granulation model, and the development of the PFCPSO approach for solving the proposed model.

Introduction

Decision-making problems are usually addressed through group decision-making (GDM) processes [5,11,26,30,32,37,44]. When addressing GDM problems, preference relations perform well in representing the outcomes of pairwise comparisons provided by decision-makers (DMs) [36]. In most real-world GDM scenarios, DMs cannot accurately express their preferences through numerical values, given the complexity and uncertainty of the problems themselves and the ambiguity inherent in human thinking. In such scenarios, linguistic variables [55], whose values are words instead of numbers, provide a flexible tool for DMs to express their preferences. As such, various linguistic expressions have been proposed to support GDM over the past decades, such as linguistic 2-tuples [33], hesitant fuzzy linguistic term sets (HFLTSs) [5], probabilistic linguistic term sets (PLTSs) [39], and linguistic distributions (LDs) [50,56,57]. Compared with linguistic 2-tuples, the latter three expressions (HFLTSs, PLTSs, and LDs) improve expressive flexibility by allowing DMs to use multiple linguistic terms instead of a single one. Specifically, PLTSs and LDs are two different names for a similar concept [39], whereas HFLTSs quantitatively characterize the hesitant preferences of DMs by means of several consecutive linguistic terms. Different from HFLTSs, LDs provide symbolic proportion information over consecutive linguistic terms to describe the distributed preferences of DMs as distributed assessments. In recent years, based on the concepts of HFLTSs and LDs, proportional HFLTSs [9] and proportional interval type-2 HFLTSs [10] have been proposed.

When such linguistic expressions as HFLTSs and PLTSs are applied to record the outcomes of pairwise comparisons, linguistic term-based preference relations, namely hesitant fuzzy linguistic preference relations [60] and probabilistic linguistic preference relations [59], have been introduced. In particular, Zhang et al. [57] proposed distributed linguistic preference relations (DLPRs) based on LDs. DLPRs are evidently suitable for modeling the uncertainty and complexity of the preference information coming from DMs in complex linguistic decision-making problems [45,50]. DLPRs not only allow DMs to express their preference information using multiple linguistic terms, but also reflect the importance degrees or different proportions of the linguistic terms used. For example, suppose that a teacher uses H = {h1 = very poor, h2 = poor, h3 = equal, h4 = good, h5 = very good} to evaluate the comprehensive academic ability of two students x1 and x2. The teacher evaluates the two students in five aspects and conducts one test for each aspect. When all the tests are completed, compared with the performance of x2, the performance of x1 is judged as h2 in two tests, h3 in two tests, and h4 in one test. The comparison results of the two students in the five aspects can then be described as an LD, that is, {(h2, 0.4), (h3, 0.4), (h4, 0.2)}. In this study, we continue to focus on GDM with DLPRs to enrich linguistic decision information management through information granulation [41].
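To make the construction of this distribution concrete, the following minimal Python sketch aggregates the five test outcomes into the symbolic proportions of an LD (the integer encoding of H and the variable names are illustrative choices, not notation from the paper):

```python
from collections import Counter

# Illustrative encoding of H = {h1, ..., h5}; indices 1..5 stand for
# very poor, poor, equal, good, very good.
H = {1: "very poor", 2: "poor", 3: "equal", 4: "good", 5: "very good"}

# Outcomes of the five tests comparing x1 with x2 (from the example above):
# h2 twice, h3 twice, h4 once.
test_outcomes = [2, 3, 2, 3, 4]

# The linguistic distribution records the symbolic proportion of each term.
counts = Counter(test_outcomes)
ld = {f"h{k}": counts[k] / len(test_outcomes) for k in sorted(counts)}
print(ld)  # {'h2': 0.4, 'h3': 0.4, 'h4': 0.2}
```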

When addressing linguistic decision problems, since linguistic information itself is not operational, the issue of how to make linguistic information operational is usually encountered. For this reason, the notion of computing with words (CWW) [26,33] has attracted much attention from researchers, and various linguistic computational models [6,13,15,16,33,55] within the formal mathematical framework of CWW have been developed over the past few decades to support operations on linguistic information, including the linguistic hierarchy model [6], the 2-tuple linguistic representation model [13,33], a model based on membership functions [55], a model based on fuzzy numbers [16], and a model based on ordinal scales [15]. In these linguistic computational models, both the distribution and the semantics of the given linguistic terms are established a priori [3,4]. Recent studies [3,4,41,58] have achieved the operational realization of linguistic information through information granulation, drawing on the information-processing paradigm of granular computing [40]. Information granulation is a process of data abstraction and the derivation of knowledge from information, from which information granules (e.g., fuzzy sets, rough sets, shadowed sets, and intervals) arise [40]. The process of making linguistic information operational is consistent with information granulation. Under the information granulation framework, the question of how to obtain the operational version of linguistic information is commonly formulated as an optimization problem, where the criteria of consistency and consensus, or their weighted averaging, are usually regarded as suitable optimization performance indices [3,4,58]. In these studies, the distribution and semantics of the given linguistic terms are optimized instead of being established a priori. A considerable number of studies have thus enriched the field of linguistic information granulation in the context of GDM with linguistic information [3,4,58]. To the best of our knowledge, however, no studies have introduced the information-processing paradigm of granular computing to address the operational realization of linguistic information in GDM scenarios where DMs' preference information over alternatives is described by means of DLPRs. Hence, the first objective of this study is to fill this gap by proposing a linguistic information granulation model for supporting GDM with DLPRs.
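As an informal illustration of such a weighted-averaging performance index (the weighting parameter λ, the consistency measure CI, the consensus measure CC, and the granulation variables a below are placeholder notation rather than the paper's own symbols, which are introduced in Section 3), the optimization criterion can take the form:

```latex
\max_{\mathbf{a}} \; Q(\mathbf{a}) \;=\;
  \lambda \cdot \frac{1}{m}\sum_{k=1}^{m} CI\!\left(R^{(k)}(\mathbf{a})\right)
  \;+\; (1-\lambda)\cdot CC\!\left(R^{(1)}(\mathbf{a}),\ldots,R^{(m)}(\mathbf{a})\right),
  \qquad \lambda \in [0,1],
```

where CI evaluates the consistency of the k-th individual DLPR reconstructed from the granulated linguistic terms a, and CC evaluates the consensus among the m decision-makers.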

The proposed linguistic information granulation model is a constrained optimization problem (COP) [2] with adjustable parameters. A common way of solving COPs is to convert them into unconstrained optimization problems (UCOPs) by first introducing certain penalty functions [38]. The unconstrained problems can then be solved using common optimization methods, such as particle swarm optimization (PSO) [25,43], genetic algorithms [48], and so on. However, the use of penalty functions introduces penalty factors whose values are hard to determine; the settings of the penalty factors are somewhat subjective and empirical [34]. Thus, the feasible domain associated with the UCOPs cannot be effectively approached when using those common optimization methods. Subsequently, evolutionary computing techniques [14] have been introduced to deal with COPs [1,19,51,52]. In particular, He and Wang proposed an effective co-evolutionary particle swarm optimization (CPSO) method [19]. In CPSO, the penalty factors are adjusted adaptively through a co-evolutionary mechanism, which overcomes the problem of subjectively determining them. It should be noted, however, that CPSO loses effectiveness when encountering COPs with adjustable parameters. Developing a novel approach for solving COPs with adjustable parameters, such as the proposed linguistic information granulation model, is the second objective of this study.
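To make the penalty transform explicit, a common quadratic exterior-penalty form is shown below (this generic form is illustrative only; the specific penalty function designed for PFCPSO is defined later in the paper). A COP of the form min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0 becomes the UCOP:

```latex
\min_{x} \; F(x) \;=\; f(x)
  \;+\; \sum_{i} r_i \left(\max\{0,\, g_i(x)\}\right)^{2}
  \;+\; \sum_{j} s_j \, h_j(x)^{2},
  \qquad r_i,\, s_j > 0,
```

where the penalty factors r_i and s_j must be large enough to drive the search back toward the feasible region; choosing them is precisely the step that the co-evolutionary mechanism automates.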

To achieve the two above-mentioned objectives, this study consists of the following two parts:

  • a)

    First, an information granulation model based on two optimization criteria, consistency and consensus, is proposed to arrive at the operational realization of linguistic information in the context of GDM with DLPRs. The linguistic information granulation in this study is formulated as an optimization problem in which a combination of the consistency degree of individual DLPRs and the consensus degree among individuals is regarded as the performance index. Considering that intervals are a commonly used form of information granules in information granulation, the granulation formalism under consideration also adopts this scheme.

  • b)

    Then, inspired by the co-evolutionary mechanism adopted in CPSO, a novel solution approach, called penalty function-based co-evolutionary particle swarm optimization (PFCPSO), is developed by combining penalty functions with CPSO to effectively address COPs with adjustable parameters, such as the proposed information granulation model. Within the framework of the proposed PFCPSO approach, the penalty functions are used to transform COPs into UCOPs, and the penalty factors of the penalty functions, the adjustable parameters, and the decision variables of the optimization problems at hand are simultaneously optimized through the co-evolutionary mechanism of two populations in CPSO (a rough sketch of this two-population mechanism is given after this list).
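The sketch below gives a rough, simplified illustration of this two-population mechanism in Python. It is not the paper's PFCPSO algorithm: the toy objective, the single constraint, the sub-swarm scoring rule, the bounds, and the PSO settings are all assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy COP standing in for the granulation model (illustrative only):
# minimize f(x, lam) subject to g(x) <= 0, with lam an adjustable parameter in [0, 1].
def f(x, lam):
    return lam * (x[0] - 2.0) ** 2 + (1.0 - lam) * (x[1] + 1.0) ** 2

def g(x):
    return np.array([x[0] + x[1] - 1.0])            # single inequality constraint

def violation(x):
    return float(np.sum(np.maximum(0.0, g(x))))

def penalized(x, lam, r):
    return f(x, lam) + r * violation(x) ** 2        # penalty transform: COP -> UCOP

def pso_step(pos, vel, pbest, pbest_val, score, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update of a swarm, given the per-particle scores."""
    better = score < pbest_val
    pbest[better], pbest_val[better] = pos[better], score[better]
    gbest = pbest[np.argmin(pbest_val)].copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel[:] = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    return gbest

# Population 1: sub-swarms over (x1, x2, lam), one sub-swarm per candidate penalty factor.
# Population 2: the candidate penalty factors themselves, co-evolved instead of hand-tuned.
n_factors, sub_size, dim = 4, 10, 3
X = rng.uniform(-3.0, 3.0, (n_factors, sub_size, dim)); VX = np.zeros_like(X)
BX, BXv = X.copy(), np.full((n_factors, sub_size), np.inf)
R = rng.uniform(1.0, 500.0, (n_factors, 1)); VR = np.zeros_like(R)
BR, BRv = R.copy(), np.full(n_factors, np.inf)

for _ in range(200):
    factor_scores = np.empty(n_factors)
    for j in range(n_factors):
        r = float(R[j, 0])
        s = np.array([penalized(p[:2], np.clip(p[2], 0.0, 1.0), r) for p in X[j]])
        best = pso_step(X[j], VX[j], BX[j], BXv[j], s)
        np.clip(X[j], -10.0, 10.0, out=X[j])
        # A penalty factor is judged by whether the sub-swarm it guides reaches a
        # feasible, low-objective solution (a simplified co-evolution criterion).
        v = violation(best[:2])
        factor_scores[j] = f(best[:2], np.clip(best[2], 0.0, 1.0)) if v == 0.0 else 1e6 + v
    pso_step(R, VR, BR, BRv, factor_scores)
    np.clip(R, 1.0, 1e4, out=R)

j_best = int(np.argmin(BXv.min(axis=1)))
print("best (x1, x2, lam):", BX[j_best][np.argmin(BXv[j_best])])
```

Here the second population searches over penalty factors, and each factor is judged by the feasibility and quality of the sub-swarm it guides, which mirrors the idea of letting the co-evolutionary mechanism, rather than the analyst, tune the penalty factors.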

In summary, this study makes two main contributions. The first is the presentation of the consistency and consensus-driven information granulation model for achieving the operational realization of linguistic information in the context of GDM with DLPRs; both the distribution and the semantics of the given linguistic terms can be obtained by solving this model. The second is the development of the PFCPSO approach for coping with COPs with adjustable parameters, such as the proposed linguistic information granulation model.

The remainder of this paper is organized as follows. Section 2 covers some prerequisites. Section 3 presents the consistency and consensus-driven information granulation model in the context of GDM with DLPRs. Section 4 introduces the PFCPSO approach for solving the proposed information granulation model. Section 5 illustrates the application of the proposals presented in Sections 3 and 4. Finally, Section 6 concludes the study with suggestions for future work in this area.

Section snippets

Basic knowledge

Some basic concepts, including fuzzy preference relations (FPRs), DLPRs, the 2-tuple linguistic model, and the numerical scale model, are briefly introduced in this section.

Consistency and consensus-driven information granulation model

Compared with numerical preference relations, preference relations expressed by linguistic variables provide DMs with more convenient and accurate ways to depict their preference information over alternatives. However, linguistic terms themselves are not operational, and thus they need to be made operational to solve linguistic decision problems. The problem of making linguistic information operational is still an open issue when solving GDM with DLPRs. A granulation of linguistic information…

Proposed PFCPSO approach for solving information granulation model

In this section, inspired by the idea of the co-evolutionary mechanism adopted in the CPSO [19], a novel solution approach, called PFCPSO, is developed with a combined use of penalty functions and CPSO for addressing the proposed COP. To prepare for the presentation of the PFCPSO, the basic model of CPSO and its co-evolutionary mechanism are briefly introduced first.
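As a reminder of the canonical PSO update on which CPSO builds (standard notation, not specific to this paper), each particle i adjusts its velocity and position as:

```latex
v_i^{t+1} \;=\; \omega\, v_i^{t}
  \;+\; c_1 r_1 \left(p_i^{t} - x_i^{t}\right)
  \;+\; c_2 r_2 \left(p_g^{t} - x_i^{t}\right),
\qquad
x_i^{t+1} \;=\; x_i^{t} + v_i^{t+1},
```

where ω is the inertia weight, c1 and c2 are acceleration coefficients, r1 and r2 are uniform random numbers in [0, 1], p_i is the particle's personal best, and p_g is the global best of the swarm.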

Illustrative example

In this section, a comprehensive evaluation problem is analyzed using the proposals to demonstrate their applicability and validity. Through this illustrative example, the implementation process of the proposals is introduced in detail. Moreover, two comparative studies are conducted to examine the effectiveness of the proposals.

Conclusions and future work

Given the promising performance of DLPRs in expressing DMs' preference information, linguistic GDM with this type of preference relation has attracted considerable attention over the past years. However, the problem of making linguistic information operational in such linguistic decision scenarios is still an open issue. To achieve the operational realization of linguistic information granules in GDM with DLPRs, this study developed an information granulation model based on the criteria…

Author Contributions

Qiang Zhang: Conceptualization, Writing - Original Draft, Funding acquisition;

Ting Huang: Software, Investigation, Methodology;

Xiaoan Tang: Writing - Review & Editing, Methodology;

Kaijie Xu: Validation;

Witold Pedrycz: Language improvement, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This research was supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (No. 71521001), the National Natural Science Foundation of China (Nos. 71501055, 71690230, 71690235, 72071063, and 72071060), and the Fundamental Research Funds for the Central Universities (Nos. JZ2020HGQA0168 and PA2021KCPY0030).

References (63)

  • D. Gu et al., A case-based reasoning system based on weighted heterogeneous value distance metric for breast cancer diagnosis, Artif. Intell. Med. (2017)

  • Q. He et al., An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Eng. Appl. Artif. Intell. (2007)

  • F. Herrera et al., A model of consensus in group decision making under linguistic assessments, Fuzzy Sets Syst. (1996)

  • E. Herrera-Viedma et al., A review of soft consensus models in a fuzzy environment, Inf. Fusion (2014)

  • E. Herrera-Viedma et al., Some issues on consistency of fuzzy preference relations, Eur. J. Oper. Res. (2004)

  • C. Li et al., Personalized individual semantics in computing with words for supporting linguistic group decision making. An application on consensus reaching, Inf. Fusion (2017)

  • C. Li et al., Personalized individual semantics based on consistency in hesitant linguistic group decision making with comparative linguistic expressions, Knowl.-Based Syst. (2018)

  • E. Mezura-Montes et al., Constraint-handling in nature-inspired numerical optimization: Past, present and future, Swarm Evol. Comput. (2011)

  • J.A. Morente-Molinera et al., A novel multi-criteria group decision-making method for heterogeneous and dynamic contexts using multi-granular fuzzy linguistic modelling and consensus measures, Inf. Fusion (2020)

  • H. Nurmi, Approaches to collective decision making with fuzzy preference relations, Fuzzy Sets Syst. (1981)

  • A. Panda et al., A Symbiotic Organisms Search algorithm with adaptive penalty function to solve multi-objective constrained optimization problems, Appl. Soft Comput. (2016)

  • Q. Pang et al., Probabilistic linguistic term sets in multi-attribute group decision making, Inf. Sci. (2016)

  • W. Pedrycz et al., A granulation of linguistic information in AHP decision-making problems, Inf. Fusion (2014)

  • X. Tang et al., Consistency and consensus-driven models to personalize individual semantics of linguistic terms for supporting group decision making with distribution linguistic preference relations, Knowl.-Based Syst. (2020)

  • X. Tang et al., Distribution linguistic preference relations with incomplete symbolic proportions for group decision making, Appl. Soft Comput. (2020)

  • T. Tanino, Fuzzy preference orderings in group decision making, Fuzzy Sets Syst. (1984)

  • Y. Wu et al., Distributed linguistic representations in decision making: Taxonomy, key elements and applications, and challenges in data science and explainable artificial intelligence, Inf. Fusion (2021)

  • Z. Yang et al., Surrogate-assisted classification-collaboration differential evolution for expensive constrained optimization problems, Inf. Sci. (2020)

  • L.A. Zadeh, Toward a perception-based theory of probabilistic reasoning with imprecise probabilities, J. Stat. Plan. Inference (2002)

  • L.A. Zadeh, Fuzzy sets, Inf. Control (1965)

  • L.A. Zadeh, The concept of a linguistic variable and its application to approximate reasoning-I, Inf. Sci. (1975)