1 Introduction

Three-way decision-making is a decision model that aligns with the human cognitive strategy of “divide and conquer.” It offers a reasonable approach to handling uncertain decision-making problems. This approach accounts for both the uncertainties and the costs associated with the decision process, reflecting human cognitive processes and decision-making habits (Hu and Yao 2019; Liang and Liu 2015; Zhang et al. 2020). Therefore, it is a topic worthy of in-depth research. The three-way decision model was initially proposed by Yao et al. based on rough set theory. It expands the traditional two-way decision framework into three strategies: acceptance, rejection, and deferment. By introducing the intermediate state of “deferment”, this model provides more flexibility for decision-makers, allowing them to postpone their final decision in situations characterized by insufficient or ambiguous information (Yao 2011). This decision-making approach is effective in addressing complex problems and has therefore found widespread applications in areas such as classification, risk assessment, and information retrieval (Liang et al. 2017, 2018).

Rough set theory, proposed by Pawlak in 1982, serves as an important tool for addressing uncertainty. It processes uncertain information through upper and lower approximations (Pawlak 1998). While Pawlak’s rough set approach is effective for handling precise data, it exhibits limitations when dealing with fuzzy and uncertain information. To address the limitations of traditional rough set models, scholars have developed extensions such as neighborhood rough sets, covering rough sets, and variable precision rough sets. Meanwhile, extensions focusing on fuzzy and incomplete data management have emerged (Das et al. 2024; Das and Granados 2021; Das 2016; Das and Granados 2023), showing higher accuracy in decision-making with conflicting criteria (Božanić et al. 2024; Gul 2024). Furthermore, rough sets and their extensions are widely used in multi-criteria decision-making, including sustainability in tourism and autonomous vehicle acceptance (Kousar and Kausar 2024; Song et al. 2024).

Intuitionistic fuzzy sets, an extension of fuzzy sets, were introduced by Atanassov in 1986. Intuitionistic fuzzy sets describe imprecise problems from three dimensions: membership degree, non-membership degree, and hesitation degree (Atanassov 1986). Compared with Zadeh’s fuzzy sets, intuitionistic fuzzy sets provide a more detailed representation of an object’s membership status concerning a certain concept, along with the associated uncertainty. This extension enables intuitionistic fuzzy sets to address more intricate decision-making problems, making them particularly suitable for multi-attribute decision-making and classification tasks (Huang et al. 2014; Zhang 2012; Sun et al. 2020a, b). Moreover, the integration of intuitionistic fuzzy sets for group decision-making has been demonstrated, effectively handling membership and hesitation degrees (Das and Granados 2022a, b). However, real-world problems present diverse challenges, and certain multi-attribute decisions may be influenced by multiple factors. In such cases, membership and non-membership degrees alone cannot effectively represent the corresponding decision information. To address this issue, the concept of support intuitionistic fuzzy sets was proposed (Nguyen 2015). Support intuitionistic fuzzy sets incorporate membership degree, non-membership degree, and support degree as three distinctive features, serving as an extension of intuitionistic fuzzy sets to describe the support of various factors to membership degrees during the decision-making process. Therefore, support intuitionistic fuzzy sets represent a kind of fuzzy set extension model worthy of further study.

Intuitionistic fuzzy rough sets, as a product of the integration of intuitionistic fuzzy sets and rough sets, have demonstrated effectiveness in addressing complex decision-making problems. Compared to the traditional rough set models, which define upper and lower approximations based on equivalence relations, intuitionistic fuzzy rough sets incorporate the upper and lower approximations of membership, non-membership, and hesitation degrees. This allows for a more granular and comprehensive representation of uncertainty when calculating decision boundaries (Liang et al. 2017; Xue et al. 2020). However, they still exhibit limitations when it comes to handling multi-granular decision problems, requiring further optimization and improvement. Multi-granular rough sets, as a novel tool for addressing uncertainty, introduce multiple granularity layers to describe and analyze information, allowing decision models to consider different granular perspectives (Liu 2010; Tan et al. 2019). The introduction of multi-granular rough sets effectively addresses the potential oversight of details that can occur in single granularity contexts, thereby enhancing the model’s capability to handle complex information. Studying multi-granular rough sets within the framework of support intuitionistic fuzzy sets can significantly improve the decision efficiency of the models. In related research, Xue et al., based on multi-granular rough set theory, explored multi-attribute decision problems involving information conflict from the perspective of support intuitionistic fuzzy sets (Xue et al. 2020).

Overlap functions and grouping functions, a specific class of aggregation functions that are not required to be associative, were proposed by Bustince et al. in 2009 and 2012, respectively (Bustince et al. 2010; Bedregal et al. 2013). These functions have emerged as a focal point of research in the field of aggregation functions. In recent years, the construction of various types of rough set models based on overlap functions and grouping functions has become an important research direction in rough set theory. For example, in 2021, Qiao established an (IO)-fuzzy rough set using overlap functions and their induced residual implications (Qiao 2021). In 2022, Jiang and Hu proposed the (OG)-fuzzy rough set model utilizing overlap and grouping functions over complete lattices, and explored its properties (Jiang and Hu 2022). That same year, Zhang introduced a multi-granular fuzzy rough set model based on overlap functions and discussed a novel approach to addressing multi-attribute group decision-making problems (Zhang et al. 2023). In 2023, Qiao et al. presented a fuzzy probabilistic rough set model based on overlap functions and discussed its properties and applications (Han et al. 2024). Additionally, Zhang introduced a variable precision fuzzy rough set model based on overlap functions, applying it to tumor classification problems and enhancing classification accuracy (Zhang et al. 2024).

Traditional fuzzy rough sets, hesitant fuzzy sets, and probabilistic rough sets have been widely applied to decision-making problems; however, each framework exhibits inherent limitations. Fuzzy rough sets struggle with multi-granular perspectives (Pawlak 1998; Sun et al. 2017), hesitant fuzzy sets effectively aggregate conflicting preferences but lack mechanisms for managing attribute interdependencies (Xu 2007), and probabilistic rough sets excel in uncertainty quantification but are less adaptable to multi-attribute scenarios (Yao 2011). To address these challenges, this study proposes a multi-granular support intuitionistic fuzzy rough set, which integrates the strengths of these models while overcoming their limitations to a certain extent. At the same time, existing fuzzy rough set models often fail to handle overlap factors and multi-granular perspectives effectively, limiting their applicability in dynamic and complex decision-making scenarios (Zhang et al. 2023; Jiang and Hu 2022; Han et al. 2024). While overlap functions have been employed to address certain multi-attribute problems, their integration with grouping functions remains underexplored (Jiang and Hu 2022; Zhang et al. 2024). This gap underscores the need for enhanced models that provide greater precision, adaptability, and scalability, particularly in domains like medical diagnosis and resource management (Jiang and Hu 2022; Zhang et al. 2024).

In light of the aforementioned analyses, a natural question arises: is it possible to integrate the advantages of support intuitionistic fuzzy sets, rough sets, overlap functions, and three-way decision-making to study a three-way decision model based on overlap and grouping functions within the framework of support intuitionistic fuzzy sets? To address this question, this study is motivated by two primary objectives. First, it aims to advance rough set theory by developing a novel framework that incorporates overlap and grouping functions, enabling more effective handling of multi-granular and overlap factors. Second, it seeks to provide practical tools for addressing real-world decision-making challenges by optimizing decision boundaries and enhancing the scalability and flexibility of decision-making methods in various applications. This paper initially constructs upper and lower approximation models for optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets using overlap and grouping functions. Based on these models, the three-way decision rules are optimized to enhance decision accuracy and flexibility in scenarios involving conflicting information. To validate the proposed framework, a consumer decision-making algorithm is developed and applied to a specific case study. The experimental results demonstrate that the multi-granular support intuitionistic fuzzy rough set model based on overlap and grouping functions significantly improves decision-making efficiency and provides a robust approach to handling uncertainty.

This paper is organized as follows: Sect. 2 reviews the fundamental concepts of support intuitionistic fuzzy sets, multi-granular rough sets, overlap functions, and grouping functions. Section 3 presents the methodology for constructing the multi-granular support intuitionistic fuzzy rough set model using overlap functions. Section 4 introduces the support intuitionistic fuzzy set three-way decision model based on overlap functions. Section 5 provides a case analysis, validating the effectiveness of the proposed model. Finally, Sect. 6 concludes the paper and outlines directions for future research.

2 Preliminaries

This section introduces the basic concepts used in this paper, including the support intuitionistic fuzzy set model, rough set models, t-norms and t-conorms, n-dimensional overlap and grouping functions, and three-way decision theory.

2.1 Support intuitionistic fuzzy sets model

2.1.1 Intuitionistic fuzzy sets

Definition 2.1

(Liang and Liu 2015) Let \( U \) be a non-empty finite universe. \({\tilde{A}} = \left\{ < x,{\mu _{{\tilde{A}}}}(x),\right.\)\(\left.{v_{{\tilde{A}}}}(x) >|x \in U \right\} \) is called an intuitionistic fuzzy set, where \({\mu _{\tilde{A}}}(x):U \rightarrow [0,1]\) is the degree of membership of x, \({v_{\tilde{A}}}(x):U \rightarrow [0,1]\) is the degree of non-membership of x, and \(0 \le {\mu _{{\tilde{A}}}}(x) + {v_{{\tilde{A}}}}(x) \le 1\). The quantity \({\pi _A}(x) = 1 - {\mu _A}(x) - {\nu _A}(x)\) is called the degree of hesitancy of an element x with respect to an intuitionistic fuzzy set A. The ordered pairs \(({\mu _A},{\nu _A})\) are often called intuitionistic fuzzy numbers. The totality of intuitionistic fuzzy sets on \( U \) is denoted as IFS(U).

Definition 2.2

(Liang and Liu 2015) Let \(A = ({\mu _A},{\nu _A})\) be an intuitionistic fuzzy number. The score function S(A) and the exact (accuracy) function H(A) are defined as \(S(A) = {\mu _A} - {\nu _A}\) and \(H(A) = {\mu _A} + {\nu _A}\), respectively, where \(- 1 \le S(A) \le 1\) and \(0 \le H(A) \le 1\).
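Since the score and exact functions are simple arithmetic on the pair \(({\mu _A},{\nu _A})\), a minimal Python sketch (illustrative only; the function names are not from the paper) is:

```python
# Minimal sketch of Definition 2.2: score and exact functions of an
# intuitionistic fuzzy number A = (mu_A, nu_A).
def score(mu: float, nu: float) -> float:
    return mu - nu      # S(A) = mu_A - nu_A, in [-1, 1]

def exact(mu: float, nu: float) -> float:
    return mu + nu      # H(A) = mu_A + nu_A, in [0, 1]

# Example: A = (0.6, 0.3) -> S(A) = 0.3, H(A) = 0.9 (up to floating-point rounding)
print(score(0.6, 0.3), exact(0.6, 0.3))
```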

2.1.2 Support intuitionistic fuzzy sets

Definition 2.3

(Nguyen 2015) Let \( U \) be a non-empty finite universe. \({\tilde{A}} = \left\{ < \right. \)\(\left. x,{\mu _{{\tilde{A}}}}(x),{v_{{\tilde{A}}}}(x),{\theta _{{\tilde{A}}}}(x) >|x \in U \right\} \) is called a support intuitionistic fuzzy set, where \({\mu _{{\tilde{A}}}}(x):U \rightarrow [0,1]\) is the degree of membership of x, \({v_{{\tilde{A}}}}(x):U \rightarrow [0,1]\) is the degree of non-membership of x, \({\theta _{{\tilde{A}}}}(x):U \rightarrow [0,1]\) is the degree of support membership of x, and \(0 \le {\mu _{{\tilde{A}}}}(x) + {v_{{\tilde{A}}}}(x) \le 1\), \(0 \le {\theta _{{\tilde{A}}}}(x) \le 1\). \({\theta _{{\tilde{A}}}}(x)\) denotes the degree of support that x belongs to A, which has properties similar to the degree of membership. The support intuitionistic fuzzy number is denoted by \(a = ({\mu _a},{\nu _a},{\theta _a})\). The totality of support intuitionistic fuzzy sets on U is denoted as SIFS(U).
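Viewed as a data structure, a support intuitionistic fuzzy number is a triple constrained by \(0 \le \mu + \nu \le 1\) and \(0 \le \theta \le 1\). The following Python sketch encodes exactly these constraints (the class name and validation style are illustrative assumptions, not part of the paper):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SIFN:
    """Support intuitionistic fuzzy number a = (mu, nu, theta), Definition 2.3."""
    mu: float      # membership degree
    nu: float      # non-membership degree
    theta: float   # support degree

    def __post_init__(self):
        if not (0.0 <= self.mu and 0.0 <= self.nu and self.mu + self.nu <= 1.0):
            raise ValueError("require 0 <= mu, nu and mu + nu <= 1")
        if not (0.0 <= self.theta <= 1.0):
            raise ValueError("require 0 <= theta <= 1")

a = SIFN(0.5, 0.3, 0.8)   # a valid support intuitionistic fuzzy number
```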

Definition 2.4

(Nguyen 2015) For all \( x \in U\) and \({\tilde{A}},{\tilde{B}} \in SIFS(U)\), the following relations and basic operations are defined (a short computational sketch of several of these operations is given after the list), where \(\{ < x,1,0,1 >|x \in U\}\) is abbreviated as U and \(\{ < x,0,1,0 >|x \in U\}\) is abbreviated as \(\emptyset \):

  1. (i)

    \({\tilde{A}} \subseteq {\tilde{B}} \Leftrightarrow {\mu _{\tilde{A}}}(x) \le {\mu _{{\tilde{B}}}}(x),{\nu _{{\tilde{A}}}}(x) \ge {\nu _{{\tilde{B}}}}(x),{\theta _{{\tilde{A}}}}(x) \le {\theta _{{\tilde{B}}}}(x);\)

  2. (ii)

    \({\tilde{A}} = {\tilde{B}} \Leftrightarrow {\mu _{{\tilde{A}}}}(x) = {\mu _{{\tilde{B}}}}(x),{\nu _{{\tilde{A}}}}(x) = {\nu _{\tilde{B}}}(x),{\theta _{{\tilde{A}}}}(x) = {\theta _{{\tilde{B}}}}(x);\)

  3. (iii)

    \({\tilde{A}} \cup {\tilde{B}} = \{ < x,\max ({\mu _{{\tilde{A}}}}(x),{\mu _{{\tilde{B}}}}(x)),\min ({\nu _{{\tilde{A}}}}(x),{\nu _{{\tilde{B}}}}(x)),\max ({\theta _{{\tilde{A}}}}(x),{\theta _{{\tilde{B}}}}(x)) >|x \in U\};\)

  4. (iv)

    \({\tilde{A}} \cap {\tilde{B}} = \{ < x,\min ({\mu _{{\tilde{A}}}}(x),{\mu _{{\tilde{B}}}}(x)),\max ({\nu _{{\tilde{A}}}}(x),{\nu _{{\tilde{B}}}}(x)),\min ({\theta _{{\tilde{A}}}}(x),{\theta _{{\tilde{B}}}}(x)) >|x \in U\};\)

  5. (v)

    \({{\tilde{A}}^c} = \{ < x,{\nu _{{\tilde{A}}}}(x),{\mu _{\tilde{A}}}(x),1 - {\theta _{{\tilde{A}}}}(x) >|x \in U\};\)

  6. (vi)

    \({\tilde{A}} \oplus {\tilde{B}} = \{ < x,{\mu _{{\tilde{A}}}}(x) + {\mu _{{\tilde{B}}}}(x) - {\mu _{{\tilde{A}}}}(x) \cdot {\mu _{\tilde{B}}}(x),{\nu _{{\tilde{A}}}}(x) \cdot {\nu _{{\tilde{B}}}}(x),{\theta _{{\tilde{A}}}}(x) + {\theta _{{\tilde{B}}}}(x) \)\( - {\theta _{{\tilde{A}}}}(x) \cdot {\theta _{{\tilde{B}}}}(x) >|x \in U\};\)

  7. (vii)

    \({\tilde{A}} \otimes {\tilde{B}} = \{ < x,{\mu _{{\tilde{A}}}}(x) \cdot {\mu _{{\tilde{B}}}}(x),{\nu _{{\tilde{A}}}}(x) + {\nu _{{\tilde{B}}}}(x) - {\nu _{{\tilde{A}}}}(x) \cdot {\nu _{{\tilde{B}}}}(x),{\theta _{\tilde{A}}}(x) \cdot {\theta _{{\tilde{B}}}}(x)\)\( >|x \in U\};\)

  8. (viii)

    \(\lambda {\tilde{A}} = \{ < x,1 - {(1 - {\mu _{\tilde{A}}}(x))^\lambda },{({\nu _{{\tilde{A}}}}(x))^\lambda },1 - {(1 - {\theta _{{\tilde{A}}}}(x))^\lambda }>|x \in U\},\lambda > 0;\)

  9. (ix)

    \({{\tilde{A}}^\lambda } = \{ < x,{({\mu _{{\tilde{A}}}}(x))^\lambda },1 - {(1 - {\nu _{{\tilde{A}}}}(x))^\lambda },{({\theta _{\tilde{A}}}(x))^\lambda }>|x \in U\},\lambda > 0.\)
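As announced above, the sketch below implements a few of the pointwise operations of Definition 2.4 on single support intuitionistic fuzzy values represented as (mu, nu, theta) tuples; the helper names and the sample values are illustrative, not from the paper.

```python
# Pointwise operations of Definition 2.4 on (mu, nu, theta) triples.
def sif_union(a, b):          # (iii): max / min / max
    return (max(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]))

def sif_intersection(a, b):   # (iv): min / max / min
    return (min(a[0], b[0]), max(a[1], b[1]), min(a[2], b[2]))

def sif_complement(a):        # (v): swap mu and nu, flip theta
    return (a[1], a[0], 1.0 - a[2])

def sif_sum(a, b):            # (vi): probabilistic sum on mu, theta; product on nu
    return (a[0] + b[0] - a[0] * b[0], a[1] * b[1], a[2] + b[2] - a[2] * b[2])

a, b = (0.5, 0.3, 0.8), (0.6, 0.2, 0.4)
print(sif_union(a, b))         # (0.6, 0.2, 0.8)
print(sif_intersection(a, b))  # (0.5, 0.3, 0.4)
print(sif_complement(a))       # approximately (0.3, 0.5, 0.2)
print(sif_sum(a, b))           # approximately (0.8, 0.06, 0.88)
```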

Definition 2.5

(Huang et al. 2019) Let \( U \) and \( V \) be two non-empty finite universes. A support intuitionistic fuzzy subset \({\tilde{R}}\) of \(U \times V\) is called a support intuitionistic fuzzy relation (SIFR) from \( U \) to \( V \).

The SIFR is defined as: \({\tilde{R}} = \{ < (x,y),{\mu _{\tilde{R}}}(x,y),{\nu _{{\tilde{R}}}}(x,y),{\theta _{{\tilde{R}}}}(x,y) >|(x,y) \in U \times V\},\) where \({\mu _{{\tilde{R}}}}(x,y):U \times V \rightarrow [0,1]\), \({\nu _{{\tilde{R}}}}(x,y):U \times V \rightarrow [0,1]\), \({\theta _{{\tilde{R}}}}(x,y):U \times V \rightarrow [0,1]\) and for all \((x,y) \in U \times V\), \(0 \le {\mu _{{\tilde{R}}}}(x,y) + {\nu _{{\tilde{R}}}}(x,y) \le 1\), \(0 \le {\theta _{{\tilde{R}}}}(x,y) \le 1\). The SIFR \({\tilde{R}}\) can be represented by the relational matrix \({M_{{\tilde{R}}}} = {\{ < {\mu _{{\tilde{R}}}}(x,y),{\nu _{{\tilde{R}}}}(x,y),{\theta _{\tilde{R}}}(x,y) > \} _{n \times n}}.\)

Remark 2.1

Let \({\tilde{R}} \in SIFS(U \times V)\), \({\tilde{R}}\) is said to be continuous if for all \(x \in U\), there exists \(y \in V\) such that \({\mu _{{\tilde{R}}}}(x,y) = 1,{\nu _{{\tilde{R}}}}(x,y) = 0,{\theta _{{\tilde{R}}}}(x,y) = 1.\)

2.2 Rough sets model

2.2.1 Rough sets

Definition 2.6

(Pawlak 1998) Let \( U \) be a non-empty finite universe and R an equivalence relation on \( U \), and let \({[x]_R}\) denote the equivalence class of x induced by R. For all \(X \subseteq U\), the lower and upper approximations of X are defined as: \(\underline{R}(X) = \{ x \in U \mid {[x]_R} \subseteq X\}, {\bar{R}}(X) = \{ x \in U \mid {[x]_R} \cap X \ne \emptyset \}\). X is said to be definable if \(\underline{R}(X) = {\bar{R}}(X)\) holds; otherwise X is rough.

From the lower and upper approximations of X, the positive, negative and boundary regions of X are defined as follows:

$$\begin{aligned} POS_{R}(X)&= \underline{R}(X); \\ NEG_{R}(X)&= U - {\bar{R}}(X); \\ BND_{R}(X)&= {\bar{R}}(X) - \underline{R}(X). \end{aligned}$$
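For concreteness, the following hedged Python sketch (toy data and function names are illustrative, not from the paper) computes the Pawlak approximations and the three regions when the equivalence relation is given as a partition of U:

```python
# Pawlak lower/upper approximations and three-way regions (Definition 2.6),
# with the equivalence relation R given as a partition of U into classes [x]_R.
def approximations(partition, X):
    X = set(X)
    lower = {x for block in partition for x in block if set(block) <= X}
    upper = {x for block in partition for x in block if set(block) & X}
    return lower, upper

U = {1, 2, 3, 4, 5, 6}
partition = [{1, 2}, {3, 4}, {5, 6}]      # equivalence classes [x]_R
X = {1, 2, 3}
lower, upper = approximations(partition, X)
pos, neg, bnd = lower, U - upper, upper - lower
print(pos, neg, bnd)   # {1, 2}  {5, 6}  {3, 4}
```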

2.2.2 Intuitionistic fuzzy rough sets

Definition 2.7

(Atanassov 1986) Let \( U \) and \( V \) be two non-empty finite universes and \({\tilde{R}}\) an intuitionistic fuzzy relation from \( U \) to \( V \). The totality of intuitionistic fuzzy rough sets on U is denoted as IFRS(U). For all \(A \in IFRS(U)\), the lower and upper approximations of A are defined as follows, respectively:

$$\begin{aligned} \underline{{{{\tilde{R}}}_U}} (A)&= \{< y,{\mu _{\underline{{{{\tilde{R}}}_U}} (A)}}(y),{v_{\underline{{{{\tilde{R}}}_U}} (A)}}(y)> \mathrm{{|}}y \in V\};\\ \overline{{{{\tilde{R}}}_U}} (A)&= \{ < y,{\mu _{\overline{{{\tilde{R}}_U}} (A)}}(y),{v_{\overline{{{{\tilde{R}}}_U}} (A)}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{{{{\tilde{R}}}_U}} (A)}}(y)&= \mathop {\wedge }\limits _{x \in U} \{ {\mu _A}(x) \vee {v_{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(1)
$$\begin{aligned} {v_{\underline{{{{\tilde{R}}}_U}} (A)}}(y)&= \mathop {\vee }\limits _{x \in U} \{ {v_A}(x) \wedge {\mu _{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(2)
$$\begin{aligned} {\mu _{\overline{{{{\tilde{R}}}_U}} (A)}}(y)&= \mathop {\vee }\limits _{x \in U} \{ {\mu _A}(x) \wedge {\mu _{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(3)
$$\begin{aligned} {v_{\overline{{{{\tilde{R}}}_U}} (A)}}(y)&= \mathop {\wedge }\limits _{x \in U} \{ {v_A}(x) \vee {v_{{\tilde{R}}}}(x,y)\}. \end{aligned}$$
(4)

A is said to be definable if \(\underline{{{{\tilde{R}}}_U}} (A) = \overline{{{{\tilde{R}}}_U}} (A)\) holds; otherwise A is rough.

2.2.3 Support intuitionistic fuzzy rough sets

Definition 2.8

(Xue et al. 2020) Let \( U \) and \( V \) be two non-empty finite universes and \({\tilde{R}}\) a support intuitionistic fuzzy relation from \( U \) to \( V \). The totality of support intuitionistic fuzzy rough sets on U is denoted as SIFRS(U). For all \({\tilde{A}} \in SIFRS(U)\), the lower and upper approximations of \({\tilde{A}}\) are defined as follows, respectively:

$$\begin{aligned} \underline{{{{\tilde{R}}}_U}} ({\tilde{A}})&= \{< y,{\mu _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),{v_{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),{\theta _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)> \mathrm{{|}}y \in V\};\\ \overline{{{{\tilde{R}}}_U}} ({\tilde{A}})&= \{ < y,{\mu _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),{v_{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),{\theta _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {v_{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(5)
$$\begin{aligned} {v_{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{x \in U} \{ {v_{{\tilde{A}}}}(x) \wedge {\mu _{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(6)
$$\begin{aligned} {\theta _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{\tilde{R}}}}(x,y))\}; \end{aligned}$$
(7)
$$\begin{aligned} {\mu _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(8)
$$\begin{aligned} {v_{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{x \in U} \{ {v_{{\tilde{A}}}}(x) \vee {v_{{\tilde{R}}}}(x,y)\}; \end{aligned}$$
(9)
$$\begin{aligned} {\theta _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{\tilde{R}}}(x,y)\}. \end{aligned}$$
(10)

\({\tilde{A}}\) is said to be definable if \(\underline{{{{\tilde{R}}}_U}} ({\tilde{A}}) = \overline{{{{\tilde{R}}}_U}} ({\tilde{A}})\) holds; otherwise \({\tilde{A}}\) is rough.
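A direct computational reading of Eqs. (5)-(10) is given below; the dictionaries storing \({\tilde{A}}\) and \({\tilde{R}}\) as (mu, nu, theta) triples, and the toy data, are illustrative assumptions rather than part of the paper.

```python
# Lower and upper SIF rough approximations at a point y (Definition 2.8).
def sif_lower(A, R, U, y):
    mu    = min(max(A[x][0], R[x, y][1])     for x in U)   # Eq. (5)
    nu    = max(min(A[x][1], R[x, y][0])     for x in U)   # Eq. (6)
    theta = min(max(A[x][2], 1 - R[x, y][2]) for x in U)   # Eq. (7)
    return mu, nu, theta

def sif_upper(A, R, U, y):
    mu    = max(min(A[x][0], R[x, y][0]) for x in U)       # Eq. (8)
    nu    = min(max(A[x][1], R[x, y][1]) for x in U)       # Eq. (9)
    theta = max(min(A[x][2], R[x, y][2]) for x in U)       # Eq. (10)
    return mu, nu, theta

U, V = ["x1", "x2"], ["y1"]
A = {"x1": (0.7, 0.2, 0.6), "x2": (0.4, 0.5, 0.3)}
R = {("x1", "y1"): (0.8, 0.1, 0.9), ("x2", "y1"): (0.3, 0.6, 0.2)}
print(sif_lower(A, R, U, "y1"))   # approximately (0.6, 0.3, 0.6)
print(sif_upper(A, R, U, "y1"))   # approximately (0.7, 0.2, 0.6)
```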

Some properties of the support intuitionistic fuzzy rough approximations for the case \(U=V\) are given below.

Proposition 2.1

(Xue et al. 2020) Let \((U,{\tilde{R}})\) be a generalized support intuitionistic fuzzy approximation space, where \({\tilde{R}}\) is a support intuitionistic fuzzy relation on \( U \). For all \({\tilde{A}}, {\tilde{B}} \in SIFRS(U)\), \(\underline{{\tilde{R}}} ({\tilde{A}})\) (\(\underline{{\tilde{R}}} ({\tilde{B}})\)) and \(\overline{{\tilde{R}}} ({\tilde{A}})\) (\(\overline{{\tilde{R}}} ({\tilde{B}})\)) satisfy the following properties:

  1. (i)

    \(\underline{{\tilde{R}}} (U) = \overline{{\tilde{R}}} (U) = U,\underline{{\tilde{R}}} (\emptyset ) = \overline{{\tilde{R}}} (\emptyset ) = \emptyset;\)

  2. (ii)

    \(\overline{{\tilde{R}}} ({\tilde{A}} \cup {\tilde{B}}) = \overline{{\tilde{R}}} ({\tilde{A}}) \cup \overline{{\tilde{R}}} (\tilde{B}),\underline{{\tilde{R}}} ({\tilde{A}} \cap {\tilde{B}}) = \underline{{\tilde{R}}} ({\tilde{A}}) \cap \underline{{\tilde{R}}} ({\tilde{B}});\)

  3. (iii)

    \(\overline{{\tilde{R}}} ({\tilde{A}} \cap {\tilde{B}}) \subseteq \overline{{\tilde{R}}} ({\tilde{A}}) \cap \overline{{\tilde{R}}} (\tilde{B}),\underline{{\tilde{R}}} ({\tilde{A}} \cup {\tilde{B}}) \supseteq \underline{{\tilde{R}}} ({\tilde{A}}) \cup \underline{{\tilde{R}}} (\tilde{B});\)

  4. (iv)

    \(\underline{{\tilde{R}}} ({{\tilde{A}}^c}) = {(\overline{{\tilde{R}}} ({\tilde{A}}))^c},\overline{{\tilde{R}}} ({{\tilde{A}}^c}) = {(\underline{{\tilde{R}}} ({\tilde{A}}))^c};\)

  5. (v)

    If \({\tilde{A}} \subseteq {\tilde{B}}\), then \(\underline{{\tilde{R}}} ({\tilde{A}}) \subseteq \underline{{\tilde{R}}} ({\tilde{B}}),\overline{{\tilde{R}}} ({\tilde{A}}) \subseteq \overline{{\tilde{R}}} ({\tilde{B}}).\)

2.2.4 Multi-granular support intuitionistic fuzzy rough sets

Definition 2.9

(Xue et al. 2020) Let \( U \) and \( V \) be two non-empty finite universes and \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{{\tilde{R}}_m}\} \) a family of support intuitionistic fuzzy relations from \( U \) to \( V \). For all \(\tilde{A} \in SIFRS(U)\), the optimistic lower and upper approximations of \({\tilde{A}}\) are defined as follows, respectively:

$$\begin{aligned} \underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{< y,{\mu _{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{\theta _{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)>|y \in V\}; \\ \overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{ < y,{\mu _{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\tilde{A})}}(y),{\theta _{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(11)
$$\begin{aligned} {\nu _{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(12)
$$\begin{aligned} {\theta _{\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\}; \end{aligned}$$
(13)
$$\begin{aligned} {\mu _{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(14)
$$\begin{aligned} {\nu _{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(15)
$$\begin{aligned} {\theta _{\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\}. \end{aligned}$$
(16)

We call \({\tilde{A}}\) definable if \(\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds for all \(y \in V\); otherwise \({\tilde{A}}\) is rough.

Definition 2.10

(Xue et al. 2020) Let \( U \) and \( V \) be two non-empty finite universes and \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{{\tilde{R}}_m}\} \) a family of support intuitionistic fuzzy relations from \( U \) to \( V \). For all \(\tilde{A} \in SIFRS(U)\), the pessimistic lower and upper approximations of \({\tilde{A}}\) are defined as follows, respectively:

$$\begin{aligned} \underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{< y,{\mu _{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{\theta _{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)>|y \in V\};\\ \overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{ < y,{\mu _{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\tilde{A})}}(y),{\theta _{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(17)
$$\begin{aligned} {\nu _{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(18)
$$\begin{aligned} {\theta _{\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\}; \end{aligned}$$
(19)
$$\begin{aligned} {\mu _{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(20)
$$\begin{aligned} {\nu _{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\wedge }\limits _{i = 1}^m \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\}; \end{aligned}$$
(21)
$$\begin{aligned} {\theta _{\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {\vee }\limits _{i = 1}^m \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\}. \end{aligned}$$
(22)

We call \({\tilde{A}}\) definable if \(\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds for all \(y \in V\); otherwise \({\tilde{A}}\) is rough.
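Computationally, Definitions 2.9 and 2.10 only differ from Definition 2.8 in how the granule-level components are combined: with max/min for the optimistic case (Eqs. (11)-(16)) and with min/max for the pessimistic case (Eqs. (17)-(22)). The lower approximations are sketched below under the same illustrative (mu, nu, theta) dictionary encoding used earlier; the upper approximations are obtained analogously.

```python
# Per-granule lower-approximation components of Definition 2.8 at a point y.
def lower_components(A, R, U, y):
    mu    = min(max(A[x][0], R[x, y][1])     for x in U)
    nu    = max(min(A[x][1], R[x, y][0])     for x in U)
    theta = min(max(A[x][2], 1 - R[x, y][2]) for x in U)
    return mu, nu, theta

def optimistic_lower(A, relations, U, y):        # Eqs. (11)-(13)
    parts = [lower_components(A, R, U, y) for R in relations]
    return (max(p[0] for p in parts),            # mu:    join over granules
            min(p[1] for p in parts),            # nu:    meet over granules
            max(p[2] for p in parts))            # theta: join over granules

def pessimistic_lower(A, relations, U, y):       # Eqs. (17)-(19)
    parts = [lower_components(A, R, U, y) for R in relations]
    return (min(p[0] for p in parts),
            max(p[1] for p in parts),
            min(p[2] for p in parts))

U, y = ["x1", "x2"], "y1"
A  = {"x1": (0.7, 0.2, 0.6), "x2": (0.4, 0.5, 0.3)}
R1 = {("x1", "y1"): (0.8, 0.1, 0.9), ("x2", "y1"): (0.3, 0.6, 0.2)}
R2 = {("x1", "y1"): (0.5, 0.4, 0.6), ("x2", "y1"): (0.7, 0.2, 0.5)}
print(optimistic_lower(A, [R1, R2], U, y))   # approximately (0.6, 0.3, 0.6)
print(pessimistic_lower(A, [R1, R2], U, y))  # approximately (0.4, 0.5, 0.5)
```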

2.3 t-norm and t-conorm

Definition 2.11

(Zhan et al. 2019) A t-norm T is a binary function on the unit interval [0,1] that is commutative, associative, and monotone (non-decreasing), and satisfies \(T(x,1) = x\) for all \(x \in [0,1]\). The following are common t-norms:

  1. (i)

    \({T_M}(x,y) = \min (x,y)\) (Standard Minimum Operator);

  2. (ii)

    \({T_P}(x,y) = x * y\) (Product Operator);

  3. (iii)

    \({T_L}(x,y) = \max (x + y - 1,0)\) (Lukasiewicz t-norm).

Definition 2.12

(Zhan et al. 2019) A t-conorm S is a binary function on the unit interval [0,1] that is commutative, associative, and monotone (non-decreasing), and satisfies \(S(x,0) = x\) for all \(x \in [0,1]\). The following are common t-conorms (a short numerical sketch follows the list):

  1. (i)

    \({S_M}(x,y) = \max (x,y)\) (Standard Maximum Operator);

  2. (ii)

    \({S_P}(x,y) = x + y - x * y\) (Probabilistic Sum);

  3. (iii)

    \({S_L}(x,y) = \min (x + y,1)\) (Bounded Sum).
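A small numerical sketch of the listed t-norms and t-conorms (illustrative values only):

```python
# Common t-norms and t-conorms from Definitions 2.11 and 2.12.
t_norms = {
    "T_M": lambda x, y: min(x, y),
    "T_P": lambda x, y: x * y,
    "T_L": lambda x, y: max(x + y - 1.0, 0.0),
}
t_conorms = {
    "S_M": lambda x, y: max(x, y),
    "S_P": lambda x, y: x + y - x * y,
    "S_L": lambda x, y: min(x + y, 1.0),
}
x, y = 0.6, 0.7
print({name: f(x, y) for name, f in t_norms.items()})    # approx T_M=0.6, T_P=0.42, T_L=0.3
print({name: f(x, y) for name, f in t_conorms.items()})  # approx S_M=0.7, S_P=0.88, S_L=1.0
```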

2.4 n-dimensional overlap functions and n-dimensional grouping functions

Definition 2.13

(Bustince et al. 2010) Let \(O:{[0,1]^n} \rightarrow [0,1],n \ge 2\), be an n-dimensional function. O is called an n-dimensional overlap function if it satisfies the following conditions:

  1. (i)

    Commutativity: \(O({x_1},{x_2}, \cdots,{x_i}, \cdots,{x_j}, \cdots,{x_n}) = O({x_1},{x_2}, \cdots,{x_j}, \cdots,{x_i},\cdots, \)\({x_n});\)

  2. (ii)

    Border condition: \(O({x_1},{x_2}, \cdots,{x_n}) = 0\), for \(\prod \limits _{i = 1}^n {{x_i} = 0};\)

  3. (iii)

    Border condition: \(O({x_1},{x_2}, \cdots,{x_n}) = 1\), for \(\prod \limits _{i = 1}^n {{x_i} = 1};\)

  4. (iv)

    Monotonicity: O is non-decreasing in each argument, i.e., \(O({x_1}, \cdots,{x_i}, \cdots,{x_n}) \le O({x_1}, \cdots,{y_i}, \cdots,{x_n})\) whenever \({x_i} \le {y_i};\)

  5. (v)

    Continuity: O is a continuous function.

The overlap function O is said to be idempotent if for all \( x \in [0,1],O(x,x, \cdots,x) = x\).

If the overlap function O satisfies \(O({x_1},{x_2}, \cdots,{x_n}) \le {x_1} \wedge {x_2} \wedge \cdots \wedge {x_n}\), then the overlap function O is said to be 1-distribution tight.

Example 2.1

(De et al. 2019) Some examples of n-dimensional overlap functions:

  1. (i)

    \({O_{\min }}({x_1},{x_2}, \cdots,{x_n}) = \min ({x_1},{x_2}, \cdots,{x_n});\)

  2. (ii)

    \({O_{\frac{1}{n}}}({x_1},{x_2}, \cdots,{x_n}) = {\left( {\prod \limits _{i = 1}^n {{x_i}} } \right) ^{\frac{1}{n}}};\)

  3. (iii)

    \({O_\lambda }({x_1},{x_2}, \cdots,{x_n}) = \min {({x_1},{x_2}, \cdots,{x_n})^{1 - \lambda }} \max {({x_1},{x_2}, \cdots,{x_n})^\lambda };\)

  4. (iv)

    \({O_{D{B_n}}}({x_1},{x_2},...{x_n}) = \left\{ \begin{array}{l}{\left( {\frac{{n\prod \limits _{i = 1}^n {{x_i}} }}{{\sum \limits _{i = 1}^n {{x_i}} }}} \right) ^{\frac{1}{{n - 1}}}}\mathrm{{, }}\sum \limits _{i = 1}^n {{x_i}} \ne 0,\\ 0,\mathrm{ }\sum \limits _{i = 1}^n {{x_i}} = 0.\mathrm{ }\end{array} \right. \)

Remark 2.2

The overlap function \(O_\lambda (x_1, x_2, \dots, x_n) = \min (x_1, x_2, \dots, x_n)^{1 - \lambda } \cdot \max (x_1, x_2, \dots, x_n)^\lambda \) can be illustrated with a simple example. Assume \(n = 3\), \(x_1 = 0.6\), \(x_2 = 0.8\), \(x_3 = 0.9\), and \(\lambda = 0.5\). Then: \(O_\lambda (0.6, 0.8, 0.9) = \sqrt{0.6} \cdot \sqrt{0.9} = \sqrt{0.54} \approx 0.7348.\)

This overlap function balances the minimum and maximum values of the inputs, providing a similarity measure between them.
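The calculation in Remark 2.2, together with overlap functions (i)-(iii) of Example 2.1, can be checked with the short sketch below (function names are illustrative):

```python
from math import prod

def O_min(*xs):              # Example 2.1 (i): minimum
    return min(xs)

def O_root(*xs):             # Example 2.1 (ii): geometric mean O_{1/n}
    return prod(xs) ** (1.0 / len(xs))

def O_lambda(*xs, lam=0.5):  # Example 2.1 (iii): min^(1-lambda) * max^lambda
    return min(xs) ** (1.0 - lam) * max(xs) ** lam

print(O_lambda(0.6, 0.8, 0.9, lam=0.5))   # ~0.7348, as in Remark 2.2
print(O_min(0.6, 0.8, 0.9), O_root(0.6, 0.8, 0.9))
```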

Definition 2.14

(Bustince et al. 2010) Let \(G:{[0,1]^n} \rightarrow [0,1],n \ge 2\), be an n-dimensional function. G is called an n-dimensional grouping function if it satisfies the following conditions:

  1. (i)

    Commutativity: \(G({x_1},{x_2}, \cdots,{x_i}, \cdots,{x_j}, \cdots,{x_n}) = G({x_1},{x_2}, \cdots,{x_j}, \cdots,{x_i},\cdots, \)\( {x_n});\)

  2. (ii)

    Border condition: \(G({x_1},{x_2}, \cdots,{x_n}) = 0\), for \(\sum \limits _{i = 1}^n {{x_i}} = 0;\)

  3. (iii)

    Border condition: \(G({x_1},{x_2}, \cdots,{x_n}) = 1\), if there exists i such that \({x_i} = 1;\)

  4. (iv)

    Monotonicity: G is non-decreasing in each argument, i.e., \(G({x_1}, \cdots,{x_i}, \cdots,{x_n}) \le G({x_1}, \cdots,{y_i}, \cdots,{x_n})\) whenever \({x_i} \le {y_i};\)

  5. (v)

    Continuity: G is a continuous function.

The grouping function G is said to be idempotent if for all \( x \in [0,1], G(x,x, \cdots,x) = x\).

If the grouping function G satisfies \(G({x_1},{x_2}, \cdots,{x_n}) \ge {x_1} \vee {x_2} \vee \cdots \vee {x_n}\), then the grouping function G is said to be 0-distribution expansion.

Example 2.2

(De et al. 2019) Some examples of n-dimensional grouping functions:

  1. (i)

    \({G_{\max }}({x_1},{x_2}, \cdots,{x_n}) = \max ({x_1},{x_2}, \cdots,{x_n});\)

  2. (ii)

    \({G_{{O_{\frac{1}{n}}}}}({x_1},{x_2}, \cdots,{x_n}) = 1 - {\left( {\prod \limits _{i = 1}^n {(1 - {x_i})} } \right) ^{\frac{1}{n}}};\)

  3. (iii)

    \({G_{{O_\lambda }}}({x_1},{x_2}, \cdots,{x_n}) = \max {({x_1},{x_2}, \cdots,{x_n})^{1 - \lambda }} \min {({x_1},{x_2}, \cdots,{x_n})^\lambda };\)

  4. (iv)

    \({G_{{O_{D{B_n}}}}}({x_1},{x_2},...{x_n}) = \left\{ \begin{array}{l}1 - {\left( {\frac{{n\prod \limits _{i = 1}^n {(1 - {x_i})} }}{{\sum \limits _{i = 1}^n {(1 - {x_i})} }}} \right) ^{\frac{1}{{n - 1}}}}\mathrm{{, }}\sum \limits _{i = 1}^n {(1 - {x_i})} \ne 0,\\ 1,\mathrm{ }\sum \limits _{i = 1}^n {(1 - {x_i})} = 0.\mathrm{ }\end{array} \right. \)

Remark 2.3

The grouping function \(G_\lambda (x_1, x_2, \dots, x_n) = \max (x_1, x_2, \dots, x_n)^{1 - \lambda }\cdot \min (x_1, x_2, \dots, x_n)^\lambda \) can be demonstrated using an example. Assume \(n = 3\), \(x_1 = 0.3\), \(x_2 = 0.5\), \(x_3 = 0.7\), and \(\lambda = 0.4\). Then: \(G_\lambda (0.3, 0.5, 0.7) = 0.7^{1 - 0.4} \cdot 0.3^{0.4} \approx 0.4988.\)

This grouping function combines the maximum and minimum values of the inputs, providing a measure that highlights the range of input values while balancing their contributions.

Remark 2.4

If O is an n-dimensional overlap function, then \({G_O}({x_1},{x_2}, \cdots,{x_n}) = 1 - O(1 - {x_1},1 - {x_2}, \cdots,1 - {x_n})\) is an n-dimensional grouping function.
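The grouping function of Remark 2.3 and the duality of Remark 2.4 can be checked numerically; in the hedged sketch below, \(O_\lambda\) and \(G_\lambda\) are the illustrative choices from Examples 2.1 and 2.2:

```python
def O_lambda(*xs, lam=0.5):          # overlap: min^(1-lambda) * max^lambda
    return min(xs) ** (1.0 - lam) * max(xs) ** lam

def G_lambda(*xs, lam=0.5):          # grouping: max^(1-lambda) * min^lambda
    return max(xs) ** (1.0 - lam) * min(xs) ** lam

def dual_grouping(O, *xs):           # Remark 2.4: G_O(x) = 1 - O(1 - x1, ..., 1 - xn)
    return 1.0 - O(*(1.0 - x for x in xs))

print(G_lambda(0.3, 0.5, 0.7, lam=0.4))        # ~0.4988, as in Remark 2.3
print(dual_grouping(O_lambda, 0.3, 0.5, 0.7))  # grouping function induced by O_lambda
```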

2.5 Three-way decisions

Definition 2.15

(Yao 2011) Let \( U \) be a non-empty universe and \(\omega = \{ C, \sim C\}\) the set of states, indicating that x belongs to state C or does not belong to state C, respectively. \(\xi = \{ {\alpha _P},{\alpha _B},{\alpha _N}\}\) denotes the set of actions, where \({\alpha _P}\), \({\alpha _B}\) and \({\alpha _N}\) denote the three decision actions of classifying x into the positive domain POS(C), the boundary domain BND(C) and the negative domain NEG(C), respectively. When x belongs to C, the cost losses resulting from actions \({\alpha _P}\), \({\alpha _B}\) and \({\alpha _N}\) are denoted by \({\lambda _{PP}}\), \({\lambda _{BP}}\) and \({\lambda _{NP}}\), respectively. Similarly, \({\lambda _{PN}}\), \({\lambda _{BN}}\) and \({\lambda _{NN}}\) denote the cost losses caused by taking the same actions when x does not belong to C. The cost matrix of the decision actions is shown in Table 1:

According to the Bayesian risk minimization decision principle, the three-way decision rule can be obtained as follows:

  1. (i)

    Acceptance of decision-making rules: If \(P(C|[x]) \ge \alpha'\), then \(x \in POS(C)\);

  2. (ii)

    Deferment of decision-making rules: If \(\beta' \le P(C|[x]) \le \alpha'\), then \(x \in BND(C)\);

  3. (iii)

    Rejection of decision-making rules: If \(P(C|[x]) \le \beta'\), then \(x \in NEG(C)\).

Table 1 Cost matrix for three decision actions

Remark 2.5

The application of the three-way decision models is extremely broad. For example, in a customer decision-making scenario, the three-way decision model can help determine whether a consumer will buy a product (positive domain), needs more time to decide (boundary domain), or decides not to buy (negative domain). These decisions can be based on factors such as product ratings, price, and customer reviews.
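To make the consumer scenario concrete, the sketch below applies the three rules with thresholds derived from the cost matrix of Table 1. The closed-form expressions for \(\alpha\) and \(\beta\) are the standard Bayesian-risk thresholds of decision-theoretic rough sets and are assumed here, since they are not displayed in this excerpt; the cost values are purely illustrative.

```python
# Standard decision-theoretic thresholds (assumed form):
#   alpha = (l_PN - l_BN) / ((l_PN - l_BN) + (l_BP - l_PP))
#   beta  = (l_BN - l_NN) / ((l_BN - l_NN) + (l_NP - l_BP))
def thresholds(l_PP, l_BP, l_NP, l_PN, l_BN, l_NN):
    alpha = (l_PN - l_BN) / ((l_PN - l_BN) + (l_BP - l_PP))
    beta = (l_BN - l_NN) / ((l_BN - l_NN) + (l_NP - l_BP))
    return alpha, beta

def three_way_decision(p, alpha, beta):
    """Classify by the conditional probability p = P(C | [x])."""
    if p >= alpha:
        return "POS: accept (buy)"
    if p <= beta:
        return "NEG: reject (do not buy)"
    return "BND: defer (gather more information)"

alpha, beta = thresholds(l_PP=0, l_BP=2, l_NP=6, l_PN=5, l_BN=1, l_NN=0)
print(alpha, beta)                              # ~0.667 and 0.2 for these toy costs
print(three_way_decision(0.75, alpha, beta))    # POS
print(three_way_decision(0.50, alpha, beta))    # BND
```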

3 Multi-granular support intuitionistic fuzzy rough set based on overlap function

This section introduces a novel multi-granular support intuitionistic fuzzy rough set model based on overlap functions (OMGSIFRS). The model is constructed from two aggregation functions that are not required to be associative, and it extends t-norm-based rough set theory. On the one hand, continuous t-norms without zero divisors are overlap functions; on the other hand, there exist overlap functions that are not t-norms. Hence the support intuitionistic fuzzy rough sets constructed from overlap functions form a class of rough set models that is related to, but different from, the t-norm-based ones.

3.1 Optimistic multi-granular support intuitionistic fuzzy rough sets based on overlap functions

Definition 3.1

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\), and let O be an n-dimensional overlap function and G an n-dimensional grouping function. For all \({\tilde{A}} \in SIFRS(U)\), the optimistic lower and upper approximations of \({\tilde{A}}\) are defined as follows, respectively:

$$\begin{aligned} \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{< y,{\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)>|y \in V\};\\ \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{ < y,{\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\tilde{A})}}(y),{\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(23)
$$\begin{aligned} {\nu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(24)
$$\begin{aligned} {\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \}; \end{aligned}$$
(25)
$$\begin{aligned} {\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(26)
$$\begin{aligned} {\nu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(27)
$$\begin{aligned} {\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \}. \end{aligned}$$
(28)

We call \({\tilde{A}}\) definable in the multi-granular support intuitionistic fuzzy approximation space on two universes \((U,V,\tilde{\Re })\) if \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\mathrm{{ = }}\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds for all \(y \in V\); otherwise \({\tilde{A}}\) is rough.
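A hedged computational sketch of Eqs. (23)-(28) is given below: the granule-level max/min aggregations of Definition 2.9 are replaced by a grouping function G and an overlap function O. The particular choices \(O_\lambda\) and \(G_\lambda\), the dictionary encoding, and the toy data are illustrative assumptions, not prescribed by the paper. For the pessimistic model of Definition 3.2, O and G simply exchange roles.

```python
def O(*xs, lam=0.5):    # an illustrative overlap function: min^(1-lam) * max^lam
    return min(xs) ** (1.0 - lam) * max(xs) ** lam

def G(*xs, lam=0.5):    # an illustrative grouping function: max^(1-lam) * min^lam
    return max(xs) ** (1.0 - lam) * min(xs) ** lam

def omgsifrs_lower(A, relations, U, y):          # Eqs. (23)-(25)
    mus    = [min(max(A[x][0], R[x, y][1])     for x in U) for R in relations]
    nus    = [max(min(A[x][1], R[x, y][0])     for x in U) for R in relations]
    thetas = [min(max(A[x][2], 1 - R[x, y][2]) for x in U) for R in relations]
    return G(*mus), O(*nus), G(*thetas)

def omgsifrs_upper(A, relations, U, y):          # Eqs. (26)-(28)
    mus    = [max(min(A[x][0], R[x, y][0]) for x in U) for R in relations]
    nus    = [min(max(A[x][1], R[x, y][1]) for x in U) for R in relations]
    thetas = [max(min(A[x][2], R[x, y][2]) for x in U) for R in relations]
    return O(*mus), G(*nus), O(*thetas)

U, y = ["x1", "x2"], "y1"
A  = {"x1": (0.7, 0.2, 0.6), "x2": (0.4, 0.5, 0.3)}
R1 = {("x1", "y1"): (0.8, 0.1, 0.9), ("x2", "y1"): (0.3, 0.6, 0.2)}
R2 = {("x1", "y1"): (0.5, 0.4, 0.6), ("x2", "y1"): (0.7, 0.2, 0.5)}
print(omgsifrs_lower(A, [R1, R2], U, y))
print(omgsifrs_upper(A, [R1, R2], U, y))
```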

3.2 Pessimistic multi-granular support intuitionistic fuzzy rough sets based on overlap functions

Definition 3.2

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\), and let O be an n-dimensional overlap function and G an n-dimensional grouping function. For all \({\tilde{A}} \in SIFRS(U)\), the pessimistic lower and upper approximations of \({\tilde{A}}\) are defined as follows, respectively:

$$\begin{aligned} \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{< y,{\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)>|y \in V\};\\ \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})&= \{ < y,{\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),{v_{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\tilde{A})}}(y),{\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y) >|y \in V\}. \end{aligned}$$

where

$$\begin{aligned} {\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(29)
$$\begin{aligned} {\nu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(30)
$$\begin{aligned} {\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \}; \end{aligned}$$
(31)
$$\begin{aligned} {\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(32)
$$\begin{aligned} {\nu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \}; \end{aligned}$$
(33)
$$\begin{aligned} {\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)&= \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \}. \end{aligned}$$
(34)

We call \({\tilde{A}}\) definable in the multi-granular support intuitionistic fuzzy approximation space on two universes \((U,V,\tilde{\Re })\) if \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\mathrm{{ = }}\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds for all \(y \in V\); otherwise \({\tilde{A}}\) is rough.

Remark 3.1

Compared with Definitions 2.9 and 2.10, the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets based on overlap functions (OOMGSIFRS and OPMGSIFRS) replace the union operation in Eq. (11) with a grouping function and the intersection operation in Eq. (12) with an overlap function when describing the upper and lower approximations of the approximated object. From the point of view of fuzzy logic, grouping functions generalize the logical 'or' and overlap functions generalize the logical 'and', so replacing the union and intersection operations with grouping and overlap functions, respectively, is a natural extension of the original model. Moreover, the new model no longer requires the aggregation process to be associative, so it is a multi-granular support intuitionistic fuzzy rough set model that is broader than the t-norm-based one.

Proposition 3.1

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\), for all \({\tilde{A}} \in SIFRS(U)\), the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop G\limits _{i = 1}^m (\underline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\underline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \underline{{{{\tilde{R}}}_m}} ({\tilde{A}})),\)\(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop O\limits _{i = 1}^m (\underline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\underline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \underline{{{{\tilde{R}}}_m}} ({\tilde{A}}));\)

  2. (ii)

    \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop O\limits _{i = 1}^m (\overline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\overline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \overline{{{{\tilde{R}}}_m}} ({\tilde{A}})),\)\(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop G\limits _{i = 1}^m (\overline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\overline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \overline{{{{\tilde{R}}}_m}} ({\tilde{A}})).\)

Proof

The above property clearly holds when \({\tilde{\Re }_1} = {\tilde{\Re }_2} = \cdots = {\tilde{\Re }_m},\) and O is an idempotent overlap function. The following shows that the above property holds when \({\tilde{\Re }_i}\) are not all equal and the overlap functions O are not necessarily idempotent.

(i) Combining Definitions 2.9 and 3.1, and writing \(U = \{ < {x_j},\)\(1,0,1 >|1 \le j \le m\}\) and \(\emptyset = \{ < {x_j},0,1,0 >|1 \le j \le m\}\), it follows that

$$\begin{aligned} {\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop O\limits _{i = 1}^m {\mu _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\nu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop G\limits _{i = 1}^m {v_{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \} = \mathop O\limits _{i = 1}^m {\theta _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop G\limits _{i = 1}^m {\mu _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\nu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop O\limits _{i = 1}^m {v_{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \} = \mathop {G}\limits _{i = 1}^m {\theta _{\underline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore, \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop G\limits _{i = 1}^m (\underline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\underline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \underline{{{{\tilde{R}}}_m}} ({\tilde{A}}))\) and \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop O\limits _{i = 1}^m (\underline{{{{\tilde{R}}}_1}} (\tilde{A}),\underline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \underline{{{{\tilde{R}}}_m}} ({\tilde{A}}))\).

(ii) Same as (i),

$$\begin{aligned} {\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop O\limits _{i = 1}^m {\mu _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\nu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop G\limits _{i = 1}^m {v_{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop O\limits _{i = 1}^m {\theta _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop G\limits _{i = 1}^m {\mu _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\nu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {O}\limits _{i = 1}^m \{ \mathop {\wedge }\limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop O\limits _{i = 1}^m {v_{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y),\\ {\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop {G}\limits _{i = 1}^m \{ \mathop {\vee }\limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} = \mathop {G}\limits _{i = 1}^m {\theta _{\overline{{{{\tilde{R}}}_U}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore, \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \mathop O\limits _{i = 1}^m (\overline{{{{\tilde{R}}}_1}} ({\tilde{A}}),\overline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \overline{{{{\tilde{R}}}_m}} ({\tilde{A}}))\) and \(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) = \)\(\mathop G\limits _{i = 1}^m (\overline{{{{\tilde{R}}}_1}} (\tilde{A}),\overline{{{{\tilde{R}}}_2}} ({\tilde{A}}), \cdots \overline{{{\tilde{R}}_m}} ({\tilde{A}}))\). \(\square \)

Proposition 3.2

Let \((U,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space, where \(\tilde{\Re }= \{ {\tilde{R}_1},{{\tilde{R}}_2}, \cdots,{{\tilde{R}}_m}\}\), for all \({\tilde{A}} \in SIFRS(U)\), the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  2. (ii)

    \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}).\)

Proof

(i) Combining Definitions 3.1 and 3.2, let O be an idempotent overlap function and G an idempotent grouping function. We first prove that for all \( {x_1},{x_2}, \cdots,{x_n} \in [0,1]\), \(\min ({x_1},{x_2}, \cdots,{x_n}) \le O({x_1},{x_2}, \cdots,{x_n}) \le \max ({x_1},{x_2}, \cdots,{x_n})\).

Case 1 When \({x_1} = {x_2} = \cdots = {x_n}\), the above inequality obviously holds by idempotency.

Case 2 When \({x_1},{x_2}, \cdots,{x_n}\) are not all equal, assume without loss of generality that \({x_1} = \min ({x_1},{x_2}, \cdots,{x_n})\) and \({x_n} = \max ({x_1},{x_2}, \cdots,{x_n})\). It follows from the monotonicity and idempotency of the overlap function that \({x_1} = O({x_1},{x_1}, \cdots,{x_1}) \le O({x_1},{x_2}, \cdots,{x_n})\) and \(O({x_1},{x_2}, \cdots,{x_n}) \le O({x_n},{x_n}, \cdots,{x_n}) = {x_n}\).

Therefore, \({x_1} = \min ({x_1},{x_2}, \cdots,{x_n}) \le O({x_1},{x_2}, \cdots,{x_n}) \le \max ({x_1},{x_2}, \cdots,{x_n}) = {x_n}\).

Similarly, \({x_1} = \min ({x_1},{x_2}, \cdots,{x_n}) \le G({x_1},{x_2}, \cdots,{x_n}) \le \max ({x_1},{x_2}, \cdots,{x_n}) = {x_n}\). Further:

$$\begin{aligned} {\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = {\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),\\ {\nu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \ge \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = {\nu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),\\ {\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}(x,y))\} \} = {\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore, \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\).

(ii) Same as (i),

$$\begin{aligned} {\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = {\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),\\ {\nu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \ge \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} = {\nu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y),\\ {\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} = {\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore, \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\). \(\square \)
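For readers who prefer a computational reading of the approximation operators appearing in the proofs above, the following is a minimal Python sketch, not part of the formal development, of the membership components of the optimistic lower and upper approximations for generic n-ary overlap and grouping functions; the pessimistic operators are obtained by exchanging the roles of O and G. The names `mu_A`, `nu_R`, `mu_R`, `O_prod`, and `G_prod` are illustrative assumptions.

```python
from math import prod
from typing import Callable, List

Agg = Callable[[List[float]], float]   # an n-ary overlap or grouping function on [0, 1]

def optimistic_lower_mu(mu_A, nu_R, U, G: Agg, y) -> float:
    """Membership of the optimistic lower approximation at y:
    G_{i=1..m}( min_{x in U} max( mu_A(x), nu_{R_i}(x, y) ) )."""
    return G([min(max(mu_A(x), nu_Ri(x, y)) for x in U) for nu_Ri in nu_R])

def optimistic_upper_mu(mu_A, mu_R, U, O: Agg, y) -> float:
    """Membership of the optimistic upper approximation at y:
    O_{i=1..m}( max_{x in U} min( mu_A(x), mu_{R_i}(x, y) ) )."""
    return O([max(min(mu_A(x), mu_Ri(x, y)) for x in U) for mu_Ri in mu_R])

# Illustrative n-ary aggregators: the product overlap function and its dual grouping function.
O_prod: Agg = lambda xs: prod(xs)
G_prod: Agg = lambda xs: 1 - prod(1 - x for x in xs)
```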

Proposition 3.3

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\). For all \({\tilde{A}} \in SIFRS(U)\), if \({\tilde{\Re }_i} \subseteq {\tilde{\Re }'_i}\), then the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}),\)\(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}) \subseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  2. (ii)

    \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}), \)\(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}).\)

Proof

(i) Since \({\tilde{\Re }_i} \subseteq {\tilde{\Re }'_i}\), and O is an idempotent overlap function and G is an idempotent grouping function, we have \({\mu _{{{\tilde{\Re }}_i}}}({x_j},y) \le {\mu _{{{\tilde{\Re }'}_i}}}({x_j},y)\), \({\nu _{{{\tilde{\Re }}_i}}}({x_j},y) \ge {\nu _{{{\tilde{\Re }'}_i}}}({x_j},y)\), and \({\theta _{{{\tilde{\Re }}_i}}}({x_j},y) \le {\theta _{{{\tilde{\Re }'}_i}}}({x_j},y)\). Consequently:

$$\begin{aligned} {\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\mu _{{\tilde{A}}}}({x_j}) \vee {\nu _{{{\tilde{\Re }}_i}}}({x_j},y)\} \} \ge \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\mu _{{\tilde{A}}}}({x_j}) \vee {\nu _{{{\tilde{\Re }'}_i}}}({x_j},y)\} \} \mathrm{{ = }}{\mu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} (\tilde{A})}}(y),\\ {\nu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{i = 1}^n \{ {\nu _{{\tilde{A}}}}({x_j}) \wedge {\mu _{{{\tilde{\Re }}_i}}}({x_j},y)\} \} \le \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{i = 1}^n \{ {\nu _{{\tilde{A}}}}({x_j}) \wedge {\mu _{{{\tilde{\Re }'}_i}}}({x_j},y)\} \} = {\nu _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} (\tilde{A})}}(y),\\ {\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\theta _{{\tilde{A}}}}({x_j}) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}({x_j},y))\} \} \ge \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\theta _{\tilde{A}}}({x_j}) \vee (1 - {\theta _{{{\tilde{\Re }'}_i}}}({x_j},y))\} \} = {\theta _{\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y),\\ {\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\mu _{{\tilde{A}}}}({x_j}) \vee {\nu _{{{\tilde{\Re }}_i}}}({x_j},y)\} \} \ge \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\mu _{{\tilde{A}}}}({x_j}) \vee {\nu _{{{\tilde{\Re }'}_i}}}({x_j},y)\} \} \mathrm{{ = }}{\mu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} (\tilde{A})}}(y),\\ {\nu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{i = 1}^n \{ {\nu _{{\tilde{A}}}}({x_j}) \wedge {\mu _{{{\tilde{\Re }}_i}}}({x_j},y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{i = 1}^n \{ {\nu _{{\tilde{A}}}}({x_j}) \wedge {\mu _{{{\tilde{\Re }'}_i}}}({x_j},y)\} \} = {\nu _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} (\tilde{A})}}(y),\\ {\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\theta _{{\tilde{A}}}}({x_j}) \vee (1 - {\theta _{{{\tilde{\Re }}_i}}}({x_j},y))\} \} \ge \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{j = 1}^n \{ {\theta _{\tilde{A}}}({x_j}) \vee (1 - {\theta _{{{\tilde{\Re }'}_i}}}({x_j},y))\} \} = {\theta _{\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) and \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}}) \subseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\).

(ii) Same as (i),

$$\begin{aligned} {\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\mu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y),\\ {\nu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \ge \mathop G\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\nu _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y),\\ {\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop O\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\theta _{\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} (\tilde{A})}}(y),\\ {\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\mu _{{\tilde{A}}}}(x) \wedge {\mu _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\mu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y),\\ {\nu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }}_i}}}(x,y)\} \} \ge \mathop O\limits _{i = 1}^m \{ \mathop \wedge \limits _{x \in U} \{ {\nu _{{\tilde{A}}}}(x) \vee {\nu _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\nu _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y),\\ {\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})}}(y)= & \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }}_i}}}(x,y)\} \} \le \mathop G\limits _{i = 1}^m \{ \mathop \vee \limits _{x \in U} \{ {\theta _{{\tilde{A}}}}(x) \wedge {\theta _{{{\tilde{\Re }'}_i}}}(x,y)\} \} = {\theta _{\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})}}(y). \end{aligned}$$

Therefore \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})\) and \(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }'}_i}} }}} ({\tilde{A}})\). \(\square \)

Proposition 3.4

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\), for all \({\tilde{A}} \in SIFRS(U)\), the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (U) = \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (U) = V,\)\(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\emptyset ) = \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\emptyset ) = \emptyset, \)

       \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (U) = \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (U) = V,\)\(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\emptyset ) = \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} (\emptyset ) = \emptyset;\)

  2. (ii)

    \(\underline{[O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}){]^c} = \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({{\tilde{A}}^c}),\)\({[\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})]^c} = \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({{\tilde{A}}^c}), \)

        \(\underline{[P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}){]^c} = \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({{\tilde{A}}^c}),\)\({[\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})]^c} = \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({{\tilde{A}}^c});\)

  3. (iii)

    \( \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cap {\tilde{B}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cap \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cup {\tilde{B}}) \supseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cup \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)

        \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cap {\tilde{B}}) \subseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cap \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cup {\tilde{B}}) \supseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cup \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}});\)

  4. (iv)

    \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cup {\tilde{B}}) \supseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cup \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cap {\tilde{B}}) \subseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cap \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)

        \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cup {\tilde{B}}) \supseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cup \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}} \cap {\tilde{B}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \cap \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}).\)

Proof

The proof follows directly from Proposition 2.1, Definition 3.1, and Definition 3.2. \(\square \)

Proposition 3.5

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\). For all \({\tilde{A}},{\tilde{B}} \in SIFRS(U)\) with \({\tilde{A}} \subseteq {\tilde{B}}\), the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}});\)

  2. (ii)

    \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}),\)\(\overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{B}}).\)

Proof

The proof follows directly from Proposition 2.1, Definition 3.1, and Definition 3.2. \(\square \)

Proposition 3.6

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\). For all \({\tilde{A}} \in SIFRS(U)\), if O is an idempotent overlap function and G is an idempotent grouping function, then the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets satisfy the following properties:

  1. (i)

    \(\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  2. (ii)

    \(\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  3. (iii)

    \(\underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  4. (iv)

    \(\overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}).\)

Proof

The proof follows directly from Proposition 2.1, Definition 3.1, and Definition 3.2. \(\square \)

Proposition 3.7

Let \((U,V,\tilde{\Re })\) be a multi-granular support intuitionistic fuzzy approximation space on two universes, where \(\tilde{\Re }= \{ {{\tilde{R}}_1},{{\tilde{R}}_2}, \cdots,{\tilde{R}_m}\}\). For all \({\tilde{A}} \in SIFRS(U)\), if the overlap function O satisfies the 1-distribution tight condition and the grouping function G satisfies the 0-distribution expansion condition, then the following conclusions hold:

  1. (i)

    \(\underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  2. (ii)

    \(\overline{O{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  3. (iii)

    \(\underline{P{M_{O\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}});\)

  4. (iv)

    \(\overline{P{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \overline{P{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}).\)

Proof

If the overlap function O satisfies the 1-distribution tight condition and the grouping function G satisfies the 0-distribution expansion condition, then \(O({x_1},{x_2}, \cdots,{x_n}) \le {x_1} \wedge {x_2} \wedge \cdots \wedge {x_n}\) and \(G({x_1},{x_2}, \cdots,{x_n}) \ge {x_1} \vee {x_2} \vee \cdots \vee {x_n}\). The conclusions then follow from Proposition 3.6, Definition 3.1, and Definition 3.2. \(\square \)

Remark 3.2

From Proposition 3.6 and Proposition 3.7, it can be seen that when multi-granular support intuitionistic fuzzy rough sets are constructed from overlap functions, different choices of overlap function yield models with different properties. For example, the inclusion \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds when the overlap function and the grouping function are idempotent, but it no longer holds when the overlap function O satisfies the 1-distribution expansion condition and the grouping function G satisfies the 0-distribution tight condition. Therefore, the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets based on overlap functions form a class of multi-granular support intuitionistic fuzzy rough set models that are related to, but different from, the t-norm-based multi-granular support intuitionistic fuzzy rough set models.

4 Three-way decision model for multi-granular support intuitionistic fuzzy rough sets based on overlap functions

In this section, the conditional probability is constructed by introducing a similarity measure together with positive and negative ideal solutions. Combined with the cost (loss) functions, the optimistic and pessimistic three-way decision models for multi-granular support intuitionistic fuzzy rough sets based on overlap functions are established. Following the Bayesian minimum-risk decision principle, the score function and the exact function are used to derive the decision rules, after which the results are presented and the effectiveness of the model is discussed.

Definition 4.1

(Hu and Yao 2019) Let \({\tilde{A}} = ({\mu _{{\tilde{A}}}},{\nu _{{\tilde{A}}}},{\theta _{{\tilde{A}}}})\) be a support intuitionistic fuzzy number. The score function and the exact function are defined, respectively, as \(s({\tilde{A}}) = {\mu _{{\tilde{A}}}}{\theta _{{\tilde{A}}}} - {\nu _{{\tilde{A}}}}(1 - {\theta _{{\tilde{A}}}})\) and \(h({\tilde{A}}) = {\mu _{{\tilde{A}}}}{\theta _{{\tilde{A}}}} + {\nu _{{\tilde{A}}}}(1 - {\theta _{{\tilde{A}}}})\).

Remark 4.1

The score function and the exact function are defined as the difference and the sum of \({\mu _{{\tilde{A}}}}{\theta _{{\tilde{A}}}}\) and \({\nu _{{\tilde{A}}}}(1 - {\theta _{{\tilde{A}}}})\), respectively. Here \({\mu _{{\tilde{A}}}}{\theta _{{\tilde{A}}}}\), the product of the membership degree and the support degree, indicates the degree of supported membership of x, while \({\nu _{{\tilde{A}}}}(1 - {\theta _{{\tilde{A}}}})\), the product of the non-membership degree and the complement of the support degree, indicates the degree of supported non-membership of x. The score function combines these positive and negative evaluations to determine the likelihood of a consumer's decision, offering a clear quantitative measure for classification.
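As a quick computational illustration of Definition 4.1, the following Python sketch evaluates the score and exact functions of a support intuitionistic fuzzy number; the tuple layout `(mu, nu, theta)` is an assumed convention.

```python
def score(a):
    """Score function of Definition 4.1: s(A) = mu*theta - nu*(1 - theta)."""
    mu, nu, theta = a
    return mu * theta - nu * (1 - theta)

def exact(a):
    """Exact (accuracy) function of Definition 4.1: h(A) = mu*theta + nu*(1 - theta)."""
    mu, nu, theta = a
    return mu * theta + nu * (1 - theta)

# e.g. the value attached to y_1 in the example of Sect. 5: <0.5, 0.2, 0.7>
a = (0.5, 0.2, 0.7)
print(score(a))   # 0.5*0.7 - 0.2*0.3 = 0.29
print(exact(a))   # 0.5*0.7 + 0.2*0.3 = 0.41
```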

Definition 4.2

(Hu and Yao 2019) Let \({{\tilde{A}}_1}\) and \({{\tilde{A}}_2}\) be two support intuitionistic fuzzy numbers. Their ordering can be defined as follows.

  1. (i)

    If \(s({{\tilde{A}}_1}) > s({{\tilde{A}}_2})\), then \({{\tilde{A}}_1}\) is greater than \({{\tilde{A}}_2}\) and can be expressed as \({{\tilde{A}}_1} > {{\tilde{A}}_2}\);

  2. (ii)

    If \(s({{\tilde{A}}_1}) < s({{\tilde{A}}_2})\), then \({{\tilde{A}}_1}\) is less than \({{\tilde{A}}_2}\) and can be expressed as \({{\tilde{A}}_1} < {{\tilde{A}}_2}\);

  3. (iii)

    If \(s({{\tilde{A}}_1}) = s({{\tilde{A}}_2})\), \(h({{\tilde{A}}_1}) > h({{\tilde{A}}_2})\), then \({{\tilde{A}}_1}\) is greater than \({{\tilde{A}}_2}\) and can be expressed as \({{\tilde{A}}_1} > {{\tilde{A}}_2}\);

  4. (iv)

    If \(s({{\tilde{A}}_1}) = s({{\tilde{A}}_2})\), \(h({{\tilde{A}}_1}) < h({{\tilde{A}}_2})\), then \({{\tilde{A}}_1}\) is less than \({{\tilde{A}}_2}\) and can be expressed as \({{\tilde{A}}_1} < {{\tilde{A}}_2}\);

  5. (v)

    If \(s({{\tilde{A}}_1}) = s({{\tilde{A}}_2})\), \(h({{\tilde{A}}_1}) = h({{\tilde{A}}_2})\), then \({{\tilde{A}}_1}\) is equal to \({{\tilde{A}}_2}\) and can be expressed as \({{\tilde{A}}_1} = {{\tilde{A}}_2}\).

Remark 4.2

Definitions 4.1 and 4.2 give the rules for comparing two support intuitionistic fuzzy numbers.
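The comparison of Definition 4.2 can be sketched as follows; this is an illustrative Python helper, with the score and exact functions of Definition 4.1 restated inline, and the -1/0/1 return convention is an assumption.

```python
score = lambda a: a[0] * a[2] - a[1] * (1 - a[2])   # s(A) from Definition 4.1
exact = lambda a: a[0] * a[2] + a[1] * (1 - a[2])   # h(A) from Definition 4.1

def compare_sif(a1, a2):
    """Return 1 if A1 > A2, -1 if A1 < A2 and 0 if A1 = A2, following Definition 4.2."""
    if score(a1) != score(a2):
        return 1 if score(a1) > score(a2) else -1
    if exact(a1) != exact(a2):
        return 1 if exact(a1) > exact(a2) else -1
    return 0
```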

Definition 4.3

(Zhang and Xu 2014) Let \((U,V,\tilde{\Re })\) be a multi-granular intuitionistic fuzzy approximation space on two universes, U and V be two non-empty finite universes, where \({\tilde{A}} \subseteq SIFRS(U)\), \(\tilde{\Re }= \{ {\tilde{\Re }_1},{\tilde{\Re }_2}, \cdots,{\tilde{\Re }_m}\}\) are m support intuitionistic fuzzy relations on U to V. A positive ideal solution \({\tilde{\Re }^ + }({\tilde{A}})\) and a negative ideal solution \({\tilde{\Re }^ - }({\tilde{A}})\) on a matrix \({M_{{{\tilde{\Re }}_i}}}\) can be defined respectively:

$$\begin{aligned} {\tilde{\Re }^ + }({\tilde{A}})&= \{ \langle x,\mathop {\max }\limits _{1 \le i \le m} ({\tilde{\Re }_i}({\tilde{A}}))(x)\rangle \mid x \in U\} = \{ \langle {x_j},{\mu _{{{\tilde{\Re }}^ + }({\tilde{A}})}}({x_j}),{\nu _{{{\tilde{\Re }}^ + }({\tilde{A}})}}({x_j}),{\theta _{{{\tilde{\Re }}^ + }({\tilde{A}})}}({x_j})\rangle \mid j = 1,2, \cdots,n\} ;\\ {\tilde{\Re }^ - }({\tilde{A}})&= \{ \langle x,\mathop {\min }\limits _{1 \le i \le m} ({\tilde{\Re }_i}({\tilde{A}}))(x)\rangle \mid x \in U\} = \{ \langle {x_j},{\mu _{{{\tilde{\Re }}^ - }({\tilde{A}})}}({x_j}),{\nu _{{{\tilde{\Re }}^ - }({\tilde{A}})}}({x_j}),{\theta _{{{\tilde{\Re }}^ - }({\tilde{A}})}}({x_j})\rangle \mid j = 1,2, \cdots,n\}, \end{aligned}$$

where the maximum and the minimum are taken with respect to the ordering of support intuitionistic fuzzy numbers given in Definition 4.2.
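Under this reading, a minimal sketch of the ideal-solution extraction could look as follows; the data layout `evaluations[i][j]` (the value of \({\tilde{\Re }_i}({\tilde{A}})\) at \(x_j\)) is an assumption, and ties under the score function (which Definition 4.2 breaks with the exact function) are ignored for brevity.

```python
def score(a):
    """Score function of Definition 4.1; a = (mu, nu, theta)."""
    return a[0] * a[2] - a[1] * (1 - a[2])

def ideal_solutions(evaluations):
    """evaluations[i][j]: support intuitionistic fuzzy value of granule i at object x_j.
    Returns the positive and negative ideal solutions as lists over the objects."""
    m, n = len(evaluations), len(evaluations[0])
    pos = [max((evaluations[i][j] for i in range(m)), key=score) for j in range(n)]
    neg = [min((evaluations[i][j] for i in range(m)), key=score) for j in range(n)]
    return pos, neg
```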

Definition 4.4

(Zhang and Xu 2014) Let \({\tilde{A}}\) and \({\tilde{B}}\) be support intuitionistic fuzzy sets on \(U = \{ {x_1},{x_2}, \cdots,{x_n}\}\). For every \({x_i} \in U\), the similarity measure between \({\tilde{A}}\) and \({\tilde{B}}\) can be defined as follows:

$$\begin{aligned} D({\tilde{A}}({x_i}),{\tilde{B}}({x_i})) = \frac{{{\mu _m}({x_i}) + {\nu _m}({x_i}) + {\theta _m}({x_i})}}{3}. \end{aligned}$$

where

$$\begin{aligned} {\mu _m}({x_i})= & 1 - \frac{1}{2}\sqrt{{{(\underline{{\mu _{{\tilde{A}}}}} ({x_i}) - \underline{{\mu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\mu _{{\tilde{A}}}}} ({x_i}) - \overline{{\mu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\mu _{{\tilde{A}}}}} ({x_i}) - \underline{{\mu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\underline{{\mu _{{\tilde{A}}}}} ({x_i}) - \overline{{\mu _{\tilde{B}}}} ({x_i}))}^2}},\\ {\nu _m}({x_i})= & 1 - \frac{1}{2}\sqrt{{{(\underline{{\nu _{{\tilde{A}}}}} ({x_i}) - \underline{{\nu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\nu _{{\tilde{A}}}}} ({x_i}) - \overline{{\nu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\nu _{{\tilde{A}}}}} ({x_i}) - \underline{{\nu _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\underline{{\nu _{{\tilde{A}}}}} ({x_i}) - \overline{{\nu _{\tilde{B}}}} ({x_i}))}^2}},\\ {\theta _m}({x_i})= & 1 - \frac{1}{2}\sqrt{{{(\underline{{\theta _{{\tilde{A}}}}} ({x_i}) - \underline{{\theta _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\theta _{{\tilde{A}}}}} ({x_i}) - \overline{{\theta _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\overline{{\theta _{{\tilde{A}}}}} ({x_i}) - \underline{{\theta _{{\tilde{B}}}}} ({x_i}))}^2} + {{(\underline{{\theta _{{\tilde{A}}}}} ({x_i}) - \overline{{\theta _{{\tilde{B}}}}} ({x_i}))}^2}}. \end{aligned}$$
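A direct sketch of the similarity measure of Definition 4.4 is given below, assuming the underlined and overlined values are supplied as (lower, upper) pairs for each of the three components; the function names are illustrative.

```python
from math import sqrt

def component_similarity(a, b):
    """a = (lower_A, upper_A), b = (lower_B, upper_B) for one component at x_i."""
    la, ua = a
    lb, ub = b
    return 1 - 0.5 * sqrt((la - lb) ** 2 + (ua - ub) ** 2 + (ua - lb) ** 2 + (la - ub) ** 2)

def similarity(A, B):
    """D(A(x_i), B(x_i)) of Definition 4.4: the mean of the three component similarities.
    A and B are triples ((mu_low, mu_up), (nu_low, nu_up), (theta_low, theta_up))."""
    return sum(component_similarity(a, b) for a, b in zip(A, B)) / 3
```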

Definition 4.5

(Zhang and Xu 2014) Under the three-way decision theory, \({\tilde{\Re }^ + }(\tilde{A})\) can denote the evaluation of state C and \({\tilde{\Re }^ - }({\tilde{A}})\) can denote the evaluation of state \(\sim C\). Therefore, \(\forall x \in U\) the conditional probability of x in state C can be defined as follows:

$$\begin{aligned} \Pr (C|x) = \frac{{D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ + }({\tilde{A}})} )}}{{D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ + }({\tilde{A}})} ) + D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ - }({\tilde{A}})} )}}. \end{aligned}$$

The conditional probability of x in state \(\sim C\) can be defined as follows:

$$\begin{aligned} \Pr (\sim C|x) = \frac{{D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ - }({\tilde{A}})} )}}{{D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ + }({\tilde{A}})} ) + D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ - }({\tilde{A}})} )}}. \end{aligned}$$

According to Definition 4.4, \(D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ + }({\tilde{A}})} )\) and \(D(\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i},{\Re ^ - }({\tilde{A}})} )\) denote the similarities of the multi-granular support intuitionistic fuzzy rough sets to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\), respectively.
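Given these two similarities, the conditional probabilities of Definition 4.5 are a simple normalisation; a short sketch follows, with `d_pos` and `d_neg` standing for the similarities to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\) (the numeric values are those reported for \(x_1\) in Sect. 5.2.3).

```python
def conditional_probabilities(d_pos: float, d_neg: float):
    """Pr(C|x) and Pr(~C|x) from the similarities to the positive/negative ideal solutions."""
    total = d_pos + d_neg
    return d_pos / total, d_neg / total

# Using the similarities reported for x_1 in Sect. 5.2.3: D+ = 0.8337, D- = 0.7862
pr_c, pr_not_c = conditional_probabilities(0.8337, 0.7862)
print(round(pr_c, 4), round(pr_not_c, 4))   # approximately 0.5147 and 0.4853
```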

Optimistic and pessimistic multi-granular support intuitionistic fuzzy rough set three-way decision models based on overlap functions (OOTWD and OPTWD)

The optimistic multi-granular support intuitionistic fuzzy rough set three-way decision model based on overlap functions (OOTWD) and its pessimistic counterpart (OPTWD) are established according to the Bayesian decision process. Their loss function matrix is given in Table 2 and consists of two states and three actions. The cost incurred by taking different actions in different states is expressed as a support intuitionistic fuzzy number, \({\lambda _{ij}} = < {\mu _{{\lambda _{ij}}}},{\nu _{{\lambda _{ij}}}},{\theta _{{\lambda _{ij}}}} > (i,j \in \{ P,B,N\} )\). Considering the practical meaning of the costs and the nature of support intuitionistic fuzzy numbers, \({\lambda _{ij}}\) satisfies the following relations: \({\lambda _{PP}} \le {\lambda _{BP}} \le {\lambda _{NP}}\), \({\lambda _{NN}} \le {\lambda _{BN}} \le {\lambda _{PN}}\).

Table 2 Support intuitionistic fuzzy loss function matrix

According to the properties of support intuitionistic fuzzy numbers and their operational rules, the expected losses can be further calculated as follows:

$$ \begin{gathered} R(a_{P}|x) = \hfill \\ < 1 - (1 - \mu _{{\lambda _{{PP}} }} )^{{\Pr (C|x)}} (1 - \mu _{{\lambda _{{PN}} }} )^{{\Pr (\sim C|x)}},(\nu _{{\lambda _{{PP}} }} )^{{\Pr (C|x)}} (\nu _{{\lambda _{{PN}} }} )^{{\Pr (\sim C|x)}},1 - (1 - \theta _{{\lambda _{{PP}} }} )^{{\Pr (C|x)}} (1 - \theta _{{\lambda _{{PN}} }} )^{{\Pr (\sim C|x)}} > \hfill \\ R(a_{B}|x) = \hfill \\ < 1 - (1 - \mu _{{\lambda _{{BP}} }} )^{{\Pr (C|x)}} (1 - \mu _{{\lambda _{{BN}} }} )^{{\Pr (\sim C|x)}},(\nu _{{\lambda _{{BP}} }} )^{{\Pr (C|x)}} (\nu _{{\lambda _{{BN}} }} )^{{\Pr (\sim C|x)}},1 - (1 - \theta _{{\lambda _{{BP}} }} )^{{\Pr (C|x)}} (1 - \theta _{{\lambda _{{BN}} }} )^{{\Pr (\sim C|x)}} > \hfill \\ R(a_{N}|x) = \hfill \\ < 1 - (1 - \mu _{{\lambda _{{NP}} }} )^{{\Pr (C|x)}} (1 - \mu _{{\lambda _{{NN}} }} )^{{\Pr (\sim C|x)}},(\nu _{{\lambda _{{NP}} }} )^{{\Pr (C|x)}} (\nu _{{\lambda _{{NN}} }} )^{{\Pr (\sim C|x)}},1 - (1 - \theta _{{\lambda _{{NP}} }} )^{{\Pr (C|x)}} (1 - \theta _{{\lambda _{{NN}} }} )^{{\Pr (\sim C|x)}} > \hfill \\ \end{gathered} $$
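The expected losses above aggregate the loss-matrix entries with exponents given by the conditional probabilities; a minimal sketch, assuming each \({\lambda _{ij}}\) is stored as a tuple `(mu, nu, theta)`:

```python
def expected_loss(lam_P, lam_N, pr_c, pr_not_c):
    """R(a_i|x): aggregate lambda_{iP} and lambda_{iN} (tuples (mu, nu, theta))
    with exponents Pr(C|x) and Pr(~C|x), following the formulas above."""
    mu_p, nu_p, th_p = lam_P
    mu_n, nu_n, th_n = lam_N
    mu = 1 - (1 - mu_p) ** pr_c * (1 - mu_n) ** pr_not_c
    nu = nu_p ** pr_c * nu_n ** pr_not_c
    th = 1 - (1 - th_p) ** pr_c * (1 - th_n) ** pr_not_c
    return (mu, nu, th)
```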

Based on the score function and exact function given in Definition 4.1 and Definition 4.2, the following minimum cost decision rule can be constructed:

Acceptance decision rule (P2)

If \(s(R({a_P}|x)) < \min \{ s(R({a_B}|x)), s(R({a_N}|x))\} \), then \(x \in POS(C)\);

Deferment decision rule (B2)

If \(s(R({a_B}|x)) < \min \{ s(R({a_P}|x)), s(R({a_N}|x))\} \), then \(x \in BND(C)\);

Rejection decision rule (N2)

If \(s(R({a_N}|x)) < \min \{ s(R({a_P}|x)), s(R({a_B}|x))\} \), then \(x \in NEG(C)\).
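Putting the score function and rules (P2), (B2), (N2) together, classifying an object amounts to choosing the action with the smallest score of its expected loss; a sketch (ties, which Definition 4.2 resolves with the exact function, are ignored here):

```python
def score(a):
    """Score function of Definition 4.1; a = (mu, nu, theta)."""
    return a[0] * a[2] - a[1] * (1 - a[2])

def classify(r_p, r_b, r_n):
    """Return 'POS', 'BND' or 'NEG' for expected losses R(a_P|x), R(a_B|x), R(a_N|x),
    choosing the action with the smallest score, as in rules (P2), (B2), (N2)."""
    s = {"POS": score(r_p), "BND": score(r_b), "NEG": score(r_n)}
    return min(s, key=s.get)
```

For instance, the scores 0.1375, 0.1074 and 0.0841 obtained for \({x_2}\) in Sect. 5.2.3 make the rejection action minimal, which matches the reported decision \({x_2} \in NEG(C)\).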

Remark 4.3

In the calculation of the similarity measure used in Definition 4.5, the grouping function and the overlap function are involved; therefore, the conditional probability and the expected loss are also based on the overlap function.

5 An example of three-way decision for multi-granular support intuitionistic fuzzy rough sets based on overlap functions

Suppose a consumer is interested in a variety of snacks and wants to choose among them according to their financial ability and the evaluations of the snacks by consumers of different ages. Three experts have constructed a consumer evaluation decision information system \((U,V,{\tilde{\Re }_i},{\tilde{A}})(i = 1,2,3)\) with relation matrices \({M_{{{\tilde{\Re }}_i}}}\).

Let \(U = \{ {x_1},{x_2},{x_3},{x_4},{x_5},{x_6}\}\) be 6 different age groups of consumers, where \({x_1},{x_2},{x_3},{x_4},{x_5}\) and \({x_6}\) represent consumers aged 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 years old respectively.

Let \(V = \{ {y_1},{y_2},{y_3},{y_4},{y_5},{y_6}\}\) be six different types of snacks, where \({y_i}(i = 1,2, \cdots,6)\) represent candies, nuts, cakes, yogurts, potato chips, and beverages, respectively.

$$\begin{aligned} M_{\tilde{R}_1}= & \left[ \begin{array}{cccccc} \langle 1.0, 0.0, 1.0 \rangle & \langle 0.4, 0.3, 0.3 \rangle & \langle 0.2, 0.8, 0.5 \rangle & \langle 0.1, 0.8, 0.2 \rangle & \langle 0.7, 0.2, 0.8 \rangle & \langle 0.6, 0.1, 0.4 \rangle \\ \langle 0.4, 0.3, 0.3 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.4, 0.5, 0.2 \rangle & \langle 0.3, 0.7, 0.1 \rangle & \langle 0.7, 0.1, 0.6 \rangle & \langle 0.7, 0.2, 0.5 \rangle \\ \langle 0.2, 0.8, 0.5 \rangle & \langle 0.4, 0.5, 0.2 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.4, 0.2, 0.6 \rangle & \langle 0.5, 0.5, 0.3 \rangle & \langle 0.6, 0.3, 0.7 \rangle \\ \langle 0.1, 0.8, 0.2 \rangle & \langle 0.3, 0.7, 0.1 \rangle & \langle 0.4, 0.2, 0.6 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.9, 0.1, 0.7 \rangle & \langle 0.8, 0.2, 0.3 \rangle \\ \langle 0.7, 0.2, 0.8 \rangle & \langle 0.7, 0.1, 0.6 \rangle & \langle 0.5, 0.5, 0.3 \rangle & \langle 0.9, 0.1, 0.7 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.3, 0.5, 0.2 \rangle \\ \langle 0.6, 0.1, 0.4 \rangle & \langle 0.7, 0.2, 0.5 \rangle & \langle 0.6, 0.3, 0.7 \rangle & \langle 0.8, 0.2, 0.3 \rangle & \langle 0.3, 0.5, 0.2 \rangle & \langle 1.0, 0.0, 1.0 \rangle \\ \end{array} \right] \\ M_{\tilde{R}_2}= & \left[ \begin{array}{cccccc} \langle 1.0, 0.0, 1.0 \rangle & \langle 0.7, 0.3, 0.6 \rangle & \langle 0.3, 0.5, 0.4 \rangle & \langle 0.2, 0.6, 0.3 \rangle & \langle 0.7, 0.2, 0.5 \rangle & \langle 0.6, 0.3, 0.2 \rangle \\ \langle 0.7, 0.3, 0.6 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.5, 0.3, 0.2 \rangle & \langle 0.3, 0.6, 0.4 \rangle & \langle 0.6, 0.4, 0.3 \rangle & \langle 0.5, 0.2, 0.7 \rangle \\ \langle 0.3, 0.5, 0.4 \rangle & \langle 0.5, 0.3, 0.2 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 0.6, 0.2, 0.3 \rangle & \langle 0.7, 0.3, 0.5 \rangle \\ \langle 0.2, 0.6, 0.3 \rangle & \langle 0.3, 0.6, 0.4 \rangle & \langle 0.5, 0.3, 0.7 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.7, 0.1, 0.6 \rangle & \langle 0.4, 0.2, 0.3 \rangle \\ \langle 0.7, 0.2, 0.5 \rangle & \langle 0.6, 0.4, 0.3 \rangle & \langle 0.6, 0.2, 0.3 \rangle & \langle 0.7, 0.1, 0.6 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.5, 0.3, 0.4 \rangle \\ \langle 0.6, 0.3, 0.2 \rangle & \langle 0.5, 0.2, 0.7 \rangle & \langle 0.7, 0.3, 0.5 \rangle & \langle 0.4, 0.2, 0.3 \rangle & \langle 0.5, 0.3, 0.4 \rangle & \langle 1.0, 0.0, 1.0 \rangle \\ \end{array} \right] \\ M_{\tilde{R}_3}= & \left[ \begin{array}{cccccc} \langle 1.0, 0.0, 1.0 \rangle & \langle 0.2, 0.7, 0.4 \rangle & \langle 0.7, 0.1, 0.6 \rangle & \langle 0.5, 0.4, 0.3 \rangle & \langle 0.6, 0.2, 0.2 \rangle & \langle 0.2, 0.7, 0.5 \rangle \\ \langle 0.2, 0.7, 0.4 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.4, 0.3, 0.5 \rangle & \langle 0.5, 0.2, 0.6 \rangle & \langle 0.3, 0.7, 0.5 \rangle & \langle 0.7, 0.2, 0.3 \rangle \\ \langle 0.7, 0.1, 0.6 \rangle & \langle 0.4, 0.3, 0.5 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.7, 0.3, 0.4 \rangle & \langle 0.5, 0.2, 0.3 \rangle & \langle 0.1, 0.6, 0.4 \rangle \\ \langle 0.5, 0.4, 0.3 \rangle & \langle 0.5, 0.2, 0.6 \rangle & \langle 0.7, 0.3, 0.4 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.6, 0.1, 0.7 \rangle & \langle 0.7, 0.2, 0.2 \rangle \\ \langle 0.6, 0.2, 0.2 \rangle & \langle 0.3, 0.7, 0.5 \rangle & \langle 0.5, 0.2, 0.3 \rangle & \langle 0.6, 0.1, 0.7 \rangle & \langle 1.0, 0.0, 1.0 \rangle & \langle 0.3, 0.7, 0.6 \rangle \\ \langle 0.2, 0.7, 0.5 \rangle & \langle 0.7, 0.2, 0.3 \rangle & 
\langle 0.1, 0.6, 0.4 \rangle & \langle 0.7, 0.2, 0.2 \rangle & \langle 0.3, 0.7, 0.6 \rangle & \langle 1.0, 0.0, 1.0 \rangle \\ \end{array} \right] \end{aligned}$$

The support intuitionistic fuzzy set \( \tilde{A} = \{ < y_{1},0.5,0.2,0.7 >, < y_{2},0.3,0.6,0.2 >, \)\( < y_{3},0.4,0.4,0.7 >, < y_{4},0.6,0.2,0.4 >, < y_{5},0.7,0.2,0.3 >, < y_{6},0.5,0.4,0.6 > \} \) is used to represent the different types of snacks that consumers choose according to their own preferences. For example, in the relation matrix \({M_{{{\tilde{\Re }}_1}}}\), the entry \(< {\mu _{{{\tilde{\Re }}_1}}}({x_1},{y_2}),{\nu _{{{\tilde{\Re }}_1}}}({x_1},{y_2}), {\theta _{{{\tilde{\Re }}_1}}}({x_1},{y_2}) > = < 0.4,0.3,0.3>\) indicates that, among consumers aged 15-17, \(40\%\) have a favorable opinion of nuts, \(30\%\) have an unfavorable opinion of nuts, and \(30\%\) consider the price of nuts acceptable.

The state set C indicates that the consumer can afford the snack and has a good evaluation of the snack, while \(\sim C\) indicates that the consumer cannot afford the snack and has a bad evaluation of the snack. \({a_P}, {a_B}, {a_N}\) in the action set denotes accepting, delaying, and refusing to buy, respectively. \({\lambda _{PP}}\), \({\lambda _{BP}}\) and \({\lambda _{NP}}\) denote the costs of accepting, delaying, and refusing to buy, respectively, when the consumer’s evaluation is good. \({\lambda _{PN}}\), \({\lambda _{BN}}\) and \({\lambda _{NN}}\) denote the costs of doing the same when the consumer’s evaluation is negative.

Here, \(\Pr (C|{x_i})\;(i = 1,2, \cdots,6)\) denotes the probability that a consumer in age group \({x_i}\) can afford the snack and evaluates it favorably. The experts gave the loss matrix of purchase risk for taking different actions in different states, as shown in Table 3.

Table 3 Purchase risk loss matrix for different actions in different states

5.1 The specific algorithm for the three-way decision model of multi-granular support intuitionistic fuzzy rough sets based on overlap functions, as shown in Table 4

Remark 5.1

This algorithm mainly consists of four steps: initialization; positive and negative ideal solutions and similarity calculation; probability and risk calculation; model calculation.

Overall, the time complexity of the proposed algorithm is \(O(k \cdot n \cdot m) \)\(+ O(k \cdot n \cdot m)+ O(k \cdot n) + O(k \cdot n \cdot m) + O(1)\), that is, \(O(k \cdot n \cdot m)\), where k, n, and m are the numbers of experts, consumers, and snack types, respectively.
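As a sanity check on this complexity claim, the reported summands can be mirrored directly; the operation counts below are a rough illustration under the stated decomposition, not a measured profile.

```python
def total_operations(k: int, n: int, m: int) -> int:
    """Mirror the reported decomposition O(k*n*m) + O(k*n*m) + O(k*n) + O(k*n*m) + O(1)."""
    return (k * n * m) + (k * n * m) + (k * n) + (k * n * m) + 1

# The example of Sect. 5: k = 3 experts, n = 6 consumer groups, m = 6 snack types.
print(total_operations(3, 6, 6))   # 343 elementary operations in this rough count
```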

5.2 Optimistic and pessimistic multi-granular support intuitionistic fuzzy sets for three-way decision model based on overlap function \({O_\lambda }\) and grouping function \({G_{{O_\lambda }}}\)

5.2.1 According to Definition 3.1 and Definition 3.2, the upper and lower approximations of optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets based on the overlap function \({O_\lambda }\) are computed respectively, and the results are shown as follows

See Table 4.

Table 4 Algorithm: specific extraction algorithm for three-way decision models based on overlap function
$$ \begin{gathered} \underline{{OM_{{G\sum\limits_{{i = 1}}^{m} {\tilde{\Re }_{i} } }} }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.3669,0.4517,0.4873 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3669,0.4277,0.4676 >, \hfill \\ \quad < x_{4},0.3669,0.4277,0.3669 >, < x_{5},0.3669,0.4517,0.3 >, < x_{6},0.3,0.5281,0.4290 > \}, \hfill \\ \overline{{OM_{{O\sum\limits_{{i = 1}}^{m} {\tilde{\Re }_{i} } }} }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.6284,0.2,0.7 >, < x_{2},0.5531,0.2656,0.5281 >, < x_{3},0.5281,0.2,0.7 >, \hfill \\ \quad < x_{4},0.6284,0.2,0.4731 >, < x_{5},0.7,0.2,0.5531 >, < x_{6},0.5281,0.2,0.6284 > \}, \hfill \\ \underline{{PM_{{O\sum\limits_{{i = 1}}^{m} {\tilde{\Re }^{\prime}_{i} } }} }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.3270,0.5313,0.3693 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3270,0.4676,0.4277 >, \hfill \\ \quad < x_{4},0.3270,0.4676,0.3270 >, < x_{5},0.3270,0.5313,0.3 >, < x_{6},0.3,0.5681,0.3497 > \}, \hfill \\ \overline{{PM_{{G\sum\limits_{{i = 1}}^{m} {\tilde{\Re }^{\prime}_{i} } }} }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.6684,0.2,0.7 >, < x_{2},0.6328,0.2259,0.5681 >, < x_{3},0.5681,0.2,0.7 >, \hfill \\ \quad < x_{4},0.6684,0.2,0.5918 >, < x_{5},0.7,0.2,0.6328 >, < x_{6},0.5681,0.2,0.6684 > \}. \hfill \\ \end{gathered} $$

5.2.2 Combined with the relation matrix \({M_{{{\tilde{\Re }}_i}}}(i = 1,2,3)\), according to Definition 4.1 and Definition 4.4, the positive ideal solution \({\tilde{\Re }^ + }({\tilde{A}})\) and the negative ideal solution \({\tilde{\Re }^ - }({\tilde{A}})\) are computed respectively, and the results are shown as follows:

$$ \begin{gathered} \underline{{\tilde{\Re }^{ + } }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.4,0.4,0.6 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.4,0.4,0.5 >, \hfill \\ \quad < x_{4},0.4,0.4,0.4 >, < x_{5},0.4,0.4,0.3 >, < x_{6},0.3,0.6,0.5 > \} \hfill \\ \overline{{\tilde{\Re }^{ + } }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.7,0.2,0.7 >, < x_{2},0.7,0.2,0.5 >, < x_{3},0.6,0.2,0.7 >, \hfill \\ \quad < x_{4},0.7,0.2,0.7 >, < x_{5},0.7,0.2,0.7 >, < x_{6},0.6,0.2,0.7 > \} \hfill \\ \underline{{\tilde{\Re }^{ - } }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.3,0.6,0.4 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3,0.5,0.4 >, \hfill \\ \quad < x_{4},0.3,0.5,0.3 >, < x_{5},0.3,0.6,0.3 >, < x_{6},0.3,0.5,0.3 > \}, \hfill \\ \overline{{\tilde{\Re }^{ - } }} (\tilde{A}) = \hfill \\ \quad \{ < x_{1},0.6,0.2,0.7 >, < x_{2},0.5,0.2,0.5 >, < x_{3},0.5,0.2,0.7 >, \hfill \\ \quad < x_{4},0.6,0.2,0.4 >, < x_{5},0.7,0.2,0.5 >, < x_{6},0.5,0.2,0.6 > \}. \hfill \\ \end{gathered} $$

5.2.3 Construct \({O_\lambda }OTWD\)

(i) The similarity measure functions \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}}))\) and \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}}))\) are calculated with respect to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\), respectively, and the results are shown as follows:

$$\begin{aligned} & D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}})) = \{ 0.8337,0.7556,0.8516,0.8345, 0.7882,0.7783\},\\ & D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}})) = \{ 0.7862,0.7837,0.8262,0.8342, 0.7755,0.8095\}. \end{aligned}$$

(ii) Then, \(\Pr (C|{x_i})\) and \(\Pr (\sim C|{x_i})\) were further calculated and the results are shown in Table 5:

Table 5 Conditional probabilities of \({O_\lambda }OTWD\)

(iii) In conjunction with Definition 4.5, the expected risk losses \(R({a_*}|{x_i})\;(* \in \{ P,B,N\}, i = 1,2, \cdots,6)\) under the three actions are calculated as follows:

$$ \begin{gathered} R(a_{P}|x_{1} ) = < 0.5671,0.4075,0.5365 >,R(a_{B}|x_{1} ) = < 0.5514,0.4486,0.5514 >, \hfill \\ \quad R(a_{N}|x_{1} ) = < 0.6074,0.3679,0.4880 > \hfill \\ R(a_{P}|x_{2} ) = < 0.5815,0.3950,0.5453 >,R(a_{B}|x_{2} ) = < 0.5537,0.4463,0.5537 >, \hfill \\ \quad R(a_{N}|x_{2} ) = < 0.5949,0.3785,0.4752 > \hfill \\ R(a_{P}|x_{3} ) = < 0.5709,0.4042,0.5388 >,R(a_{B}|x_{3} ) = < 0.5520,0.4480,0.5520 >, \hfill \\ \quad R(a_{N}|x_{3} ) = < 0.6042,0.3706,0.4847 > \hfill \\ R(a_{P}|x_{4} ) = < 0.5757,0.4001,0.5417 >,R(a_{B}|x_{4} ) = < 0.5528,0.4472,0.5528 >, \hfill \\ \quad R(a_{N}|x_{4} ) = < 0.6001,0.3741,0.4804 > \hfill \\ R(a_{P}|x_{5} ) = < 0.5731,0.4023,0.5401 >,R(a_{B}|x_{5} ) = < 0.5524,0.4476,0.5524 >, \hfill \\ \quad R(a_{N}|x_{5} ) = < 0.6023,0.3722,0.4827 > \hfill \\ R(a_{P}|x_{6} ) = < 0.5819,0.3946,0.5455 >,R(a_{B}|x_{6} ) = < 0.5538,0.4462,0.5538 >, \hfill \\ \quad R(a_{N}|x_{6} ) = < 0.5945,0.3788,0.4748 > \hfill \\ \end{gathered} $$

(iv) Combined with Definition 4.1, the score function is calculated as follows:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1154,s(R({a_B}|{x_1})) = 0.1029,s(R({a_N}|{x_1})) = 0.1080,\\ & s(R({a_P}|{x_2})) = 0.1375,s(R({a_B}|{x_2})) = 0.1074,s(R({a_N}|{x_2})) = 0.0841,\\ & s(R({a_P}|{x_3})) = 0.1211,s(R({a_B}|{x_3})) = 0.1041,s(R({a_N}|{x_3})) = 0.1019,\\ & s(R({a_P}|{x_4})) = 0.1285,s(R({a_B}|{x_4})) = 0.1056,s(R({a_N}|{x_4})) = 0.0939,\\ & s(R({a_P}|{x_5})) = 0.1246,s(R({a_B}|{x_5})) = 0.1048,s(R({a_N}|{x_5})) = 0.0982,\\ & s(R({a_P}|{x_6})) = 0.1381,s(R({a_B}|{x_6})) = 0.1075,s(R({a_N}|{x_6})) = 0.0833. \end{aligned}$$

(v) The histogram of the score function is shown in Fig. 1:

Fig. 1
figure 1

Score function for \({O_\lambda }OTWD\)

According to the score function and the three-way decision rule of Definition 4.5, the decision results of optimistic multi-granular support intuitionistic fuzzy rough sets based on overlap function are obtained as follows: \(BND(C) = \{ {x_1}\},\)\( NEG(C) = \{ {x_2},{x_3},{x_4},{x_5},{x_6}\}\).

Customers in the 18-20, 21-23, 24-26, 27-29 and 30-32 age groups refused to buy the snacks, while customers in the 15-17 age group were uncertain about purchasing, so further investigation is needed.

5.2.4 Construct \({O_\lambda }PTWD\)

(i) The similarity measure functions \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}}))\) and \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}}))\) are calculated with respect to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\), respectively, and the results are shown as follows:

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}})) = \{ 0.7849,0.7412,0.8305,0.8085, 0.7637, 0.7564\},\)

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}})) = \{ 0.7607,0.7603,0.8105,0.7958, 0.7530,0.7876\}.\)

(ii) Then, \(\Pr (C|{x_i})\) and \(\Pr (\sim C|{x_i})\) were further calculated and the results are shown in Table 6:

Table 6 Conditional probabilities of \({O_\lambda }PTWD\)

(iii) In conjunction with Definition 4.5, the expected risk losses \(R({a_*}|{x_i})\;(* \in \{ P,B,N\}, i = 1,2, \cdots,6)\) under the three actions are calculated as follows:

$$ \begin{gathered} R(a_{P}|x_{1} ) = < 0.5707,0.4043,0.5387 >,R(a_{B}|x_{1} ) = < 0.5520,0.4480,0.5520 >, \hfill \\ \quad R(a_{N}|x_{1} ) = < 0.6043,0.3705,0.4848 > \hfill \\ R(a_{P}|x_{2} ) = < 0.5798,0.3965,0.5442 >,R(a_{B}|x_{2} ) = < 0.5534,0.4466,0.5534 >, \hfill \\ \quad R(a_{N}|x_{2} ) = < 0.5964,0.3773,0.4767 > \hfill \\ R(a_{P}|x_{3} ) = < 0.5718,0.4034,0.5394 >,R(a_{B}|x_{3} ) = < 0.5522,0.4478,0.5522 >, \hfill \\ \quad R(a_{N}|x_{3} ) = < 0.6034,0.3713,0.4839 > \hfill \\ R(a_{P}|x_{4} ) = < 0.5732,0.4022,0.5402 >,R(a_{B}|x_{4} ) = < 0.5524,0.4476,0.5524 >, \hfill \\ \quad R(a_{N}|x_{4} ) = < 0.6022,0.3723,0.4827 > \hfill \\ R(a_{P}|x_{5} ) = < 0.5735,0.4019,0.5404 >,R(a_{B}|x_{5} ) = < 0.5524,0.4476,0.5524 >, \hfill \\ \quad R(a_{N}|x_{5} ) = < 0.6019,0.3725,0.4824 > \hfill \\ R(a_{P}|x_{6} ) = < 0.5821,0.3944,0.5456 >,R(a_{B}|x_{6} ) = < 0.5538,0.4462,0.5538 >, \hfill \\ \quad R(a_{N}|x_{6} ) = < 0.5944,0.3789,0.4746 > \hfill \\ \end{gathered} $$

(iv) Combined with Definition 4.1, the score function is calculated as follows:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1209,s(R({a_B}|{x_1})) = 0.1040,s(R({a_N}|{x_1})) = 0.1021,\\ & s(R({a_P}|{x_2})) = 0.1348,s(R({a_B}|{x_2})) = 0.1068,s(R({a_N}|{x_2})) = 0.0870,\\ & s(R({a_P}|{x_3})) = 0.1226,s(R({a_B}|{x_3})) = 0.1044,s(R({a_N}|{x_3})) = 0.1003,\\ & s(R({a_P}|{x_4})) = 0.1247,s(R({a_B}|{x_4})) = 0.1048,s(R({a_N}|{x_4})) = 0.0981,\\ & s(R({a_P}|{x_5})) = 0.1252,s(R({a_B}|{x_5})) = 0.1049,s(R({a_N}|{x_5})) = 0.0975,\\ & s(R({a_P}|{x_6})) = 0.1384,s(R({a_B}|{x_6})) = 0.1076,s(R({a_N}|{x_6})) = 0.0830. \end{aligned}$$

(v) The histogram of the score function is shown in Fig. 2:

Fig. 2
figure 2

Score function for \({O_\lambda }PTWD\)

The results of the decision are as follows: \(NEG(C) = \{ {x_1},{x_2},{x_3},{x_4},{x_5},{x_6}\}.\)

Customers in the age groups 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 refused to buy snacks.

5.3 Optimistic and pessimistic multi-granular support intuitionistic fuzzy sets for three-way decision model based on overlap function \({O_{D{B_n}}}\) and grouping function \({G_{{O_{D{B_n}}}}}\)

5.3.1 According to Definition 3.1 and Definition 3.2, the upper and lower approximations of optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets based on the overlap function \({O_{D{B_n}}}\) are computed respectively, and the results are shown as follows:

$$ \begin{gathered} \underline{{OM_{{G\sum\limits_{{i = 1}}^{m} {\tilde{\Re }_{i} } }} }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.3359,0.4536,0.4555 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3692,0.4297,0.4364 >, \hfill \\ < x_{4},0.3692,0.4297,0.3359 >, < x_{5},0.3692,0.5196,0.3 >, < x_{6},0.3,0.5636,0.4084 > \}, \hfill \\ \overline{{OM_{{O\sum\limits_{{i = 1}}^{m} {\tilde{\Re }_{i} } }} }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.6641,0.2,0.7 >, < x_{2},0.5916,0.2356,0.5303 >, < x_{3},0.5636,0.2,0.7 >, \hfill \\ < x_{4},0.6641,0.2,0.5445 >, < x_{5},0.7,0.2,0.5916 >, < x_{6},0.5636,0.2,0.6308 > \}, \hfill \\ \underline{{PM_{{O\sum\limits_{{i = 1}}^{m} {\tilde{\Re }^{\prime}_{i} } }} }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.3286,0.4804,0.4076 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3618,0.4364,0.4297 >, \hfill \\ < x_{4},0.3618,0.4364,0.3286 >, < x_{5},0.3618,0.5464,0.3 >, < x_{6},0.3,0.5703,0.3873 > \}, \hfill \\ \overline{{PM_{{G\sum\limits_{{i = 1}}^{m} {\tilde{\Re }^{\prime}_{i} } }} }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.6714,0.2,0.7 >, < x_{2},0.6127,0.2268,0.5371 >, < x_{3},0.5703,0.2,0.7 >, \hfill \\ < x_{4},0.6714,0.2,0.5924 >, < x_{5},0.7,0.2,0.6127 >, < x_{6},0.5703,0.2,0.6205 > \}. \hfill \\ \end{gathered} $$

5.3.2 Combined with the relation matrix \({M_{{{\tilde{\Re }}_i}}}(i = 1,2,3)\), according to Definition 4.1 and Definition 4.4, the positive ideal solution \({\tilde{\Re }^ + }({\tilde{A}})\) and the negative ideal solution \({\tilde{\Re }^ - }({\tilde{A}})\) are computed respectively, and the results are shown as follows:

$$ \begin{gathered} \underline{{\tilde{\Re }^{ + } }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.4,0.4,0.6 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.4,0.4,0.5 >, \hfill \\ < x_{4},0.4,0.4,0.4 >, < x_{5},0.4,0.4,0.3 >, < x_{6},0.3,0.6,0.5 > \}, \hfill \\ \overline{{\tilde{\Re }^{ + } }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.7,0.2,0.7 >, < x_{2},0.7,0.2,0.5 >, < x_{3},0.6,0.2,0.7 >, \hfill \\ < x_{4},0.7,0.2,0.7 >, < x_{5},0.7,0.2,0.7 >, < x_{6},0.6,0.2,0.7 > \}, \hfill \\ \underline{{\tilde{\Re }^{ - } }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.3,0.6,0.4 >, < x_{2},0.3,0.6,0.2 >, < x_{3},0.3,0.5,0.4 >, \hfill \\ < x_{4},0.3,0.5,0.3 >, < x_{5},0.3,0.6,0.3 >, < x_{6},0.3,0.5,0.3 > \}, \hfill \\ \overline{{\tilde{\Re }^{ - } }} (\tilde{A}) = \hfill \\ \{ < x_{1},0.6,0.2,0.7 >, < x_{2},0.5,0.2,0.5 >, < x_{3},0.5,0.2,0.7 >, \hfill \\ < x_{4},0.6,0.2,0.4 >, < x_{5},0.7,0.2,0.5 >, < x_{6},0.5,0.2,0.6 > \}. \hfill \\ \end{gathered} $$

5.3.3 Construct \({O_{D{B_n}}}OTWD\)

(i) The similarity measure functions \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}}))\) and \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}}))\) are calculated with respect to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\), respectively, and the results are shown as follows:

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}})) = \{ 0.8193,0.7506,0.8450, 0.8259, 0.7750, 0.7701\},\)

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}})) = \{ 0.7763,0.7742,0.8176, 0.8135, 0.7655,0.7981\}.\)

(ii) Then, \(\Pr (C|{x_i})\) and \(\Pr (\sim C|{x_i})\) were further calculated and the results are shown in Table 7:

Table 7 Conditional probabilities of \({O_{D{B_n}}}OTWD\)

(iii) In conjunction with Definition 4.5, the expected risk losses \(R({a_*}|{x_i})\;(* \in \{ P,B,N\}, i = 1,2, \cdots,6)\) under the three actions are calculated as follows:

$$ \begin{gathered} R(a_{P}|x_{1} ) = < 0.5670,0.4076,0.5365 >,R(a_{B}|x_{1} ) = < 0.5514,0.4486,0.5514 >, \hfill \\ \quad R(a_{N}|x_{1} ) = < 0.6074,0.3679,0.4880 > \hfill \\ R(a_{P}|x_{2} ) = < 0.5806,0.3958,0.5447 >,R(a_{B}|x_{2} ) = < 0.5536,0.4464,0.5536 >, \hfill \\ \quad R(a_{N}|x_{2} ) = < 0.5957,0.3778,0.4740 > \hfill \\ R(a_{P}|x_{3} ) = < 0.5705,0.4046,0.5385 >,R(a_{B}|x_{3} ) = < 0.5520,0.4480,0.5520 >, \hfill \\ \quad R(a_{N}|x_{3} ) = < 0.6045,0.3703,0.4850 > \hfill \\ R(a_{P}|x_{4} ) = < 0.5733,0.4021,0.5403 >,R(a_{B}|x_{4} ) = < 0.5524,0.4476,0.5524 >, \hfill \\ \quad R(a_{N}|x_{4} ) = < 0.6021,0.3724,0.4825 > \hfill \\ R(a_{P}|x_{5} ) = < 0.5738,0.4017,0.5405 >,R(a_{B}|x_{5} ) = < 0.5525,0.4475,0.5525 >, \hfill \\ \quad R(a_{N}|x_{5} ) = < 0.6017,0.3727,0.4822 > \hfill \\ R(a_{P}|x_{6} ) = < 0.5814,0.3951,0.5452 >,R(a_{B}|x_{6} ) = < 0.5537,0.4463,0.5537 >, \hfill \\ \quad R(a_{N}|x_{6} ) = < 0.5950,0.3784,0.4753 > \hfill \\ \end{gathered} $$

(iv) Combined with Definition 4.1, the score function is calculated as follows:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1153,s(R({a_B}|{x_1})) = 0.1029,s(R({a_N}|{x_1})) = 0.1081,\\ & s(R({a_P}|{x_2})) = 0.1361,s(R({a_B}|{x_2})) = 0.1071,s(R({a_N}|{x_2})) = 0.0856,\\ & s(R({a_P}|{x_3})) = 0.1205,s(R({a_B}|{x_3})) = 0.1039,s(R({a_N}|{x_3})) = 0.1025,\\ & s(R({a_P}|{x_4})) = 0.1249,s(R({a_B}|{x_4})) = 0.1048,s(R({a_N}|{x_4})) = 0.0979,\\ & s(R({a_P}|{x_5})) = 0.1256,s(R({a_B}|{x_5})) = 0.1050,s(R({a_N}|{x_5})) = 0.0971,\\ & s(R({a_P}|{x_6})) = 0.1373,s(R({a_B}|{x_6})) = 0.1073,s(R({a_N}|{x_6})) = 0.0843. \end{aligned}$$

(v) The histogram of the score function is shown in Fig. 3:

Fig. 3
figure 3

Score function for \({O_{D{B_n}}}OTWD\)

According to the score function and the three-way decision rules of Definition 4.5, the decision results of \({O_{D{B_n}}}OTWD\) are obtained as follows: \(BND(C) = \{ {x_1}\}\), \(NEG(C) = \{ {x_2},{x_3},{x_4},{x_5},{x_6}\}\).

Customers in the 18-20, 21-23, 24-26, 27-29 and 30-32 age groups refused to buy the snacks, while customers in the 15-17 age group were uncertain about purchasing, so further investigation is needed.

5.3.4 Construct \({O_{D{B_n}}}PTWD\)

(i) The similarity measure functions \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}}))\) and \(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}}))\) are calculated with respect to \({\tilde{\Re }^ + }({\tilde{A}})\) and \({\tilde{\Re }^ - }({\tilde{A}})\), respectively, and the results are shown as follows:

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ + }({\tilde{A}})) = \{ 0.8027,0.7474,0.8413, 0.8186,0.7675, 0.7653\},\)

\(D(O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}({\tilde{A}}){\Re ^ - }({\tilde{A}})) = \{ 0.7686,0.7686,0.8750, 0.8005,0.7588,0.7966\}.\)

(ii) Then, \(\Pr (C|{x_i})\) and \(\Pr (\sim C|{x_i})\) were further calculated and the results are shown in Table 8:

Table 8 Conditional probabilities of \({O_{D{B_n}}}PTWD\)

(iii) In conjunction with Definition 4.5, the expected risk losses \(R({a_*}|{x_i})\;(* \in \{ P,B,N\}, i = 1,2, \cdots,6)\) under the three actions are calculated as follows:

$$ \begin{gathered} R(a_{P}|x_{1} ) = < 0.5687,0.4061,0.5375 >,R(a_{B}|x_{1} ) = < 0.5517,0.4483,0.5517 >, \hfill \\ \quad R(a_{N}|x_{1} ) = < 0.6060,0.3691,0.4866 > \hfill \\ R(a_{P}|x_{2} ) = < 0.5802,0.3961,0.5445 >,R(a_{B}|x_{2} ) = < 0.5535,0.4465,0.5535 >, \hfill \\ \quad R(a_{N}|x_{2} ) = < 0.5961,0.3775,0.4764 > \hfill \\ R(a_{P}|x_{3} ) = < 0.5707,0.4044,0.5387 >,R(a_{B}|x_{3} ) = < 0.5520,0.4480,0.5520 >, \hfill \\ \quad R(a_{N}|x_{3} ) = < 0.6044,0.3705,0.4849 > \hfill \\ R(a_{P}|x_{4} ) = < 0.5721,0.4031,0.5396 >,R(a_{B}|x_{4} ) = < 0.5522,0.4478,0.5522 >, \hfill \\ \quad R(a_{N}|x_{4} ) = < 0.6031,0.3715,0.4836 > \hfill \\ R(a_{P}|x_{5} ) = < 0.5739,0.4016,0.5406 >,R(a_{B}|x_{5} ) = < 0.5525,0.4475,0.5525 >, \hfill \\ \quad R(a_{N}|x_{5} ) = < 0.6016,0.3728,0.4828 > \hfill \\ R(a_{P}|x_{6} ) = < 0.5821,0.3945,0.5456 >,R(a_{B}|x_{6} ) = < 0.5538,0.4462,0.5538 >, \hfill \\ \quad R(a_{N}|x_{6} ) = < 0.5944,0.3789,0.4746 > \hfill \\ \end{gathered} $$

(iv) Combined with Definition 4.1, the score function is calculated as follows:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1179,s(R({a_B}|{x_1})) = 0.1034,s(R({a_N}|{x_1})) = 0.1054,\\ & s(R({a_P}|{x_2})) = 0.1354,s(R({a_B}|{x_2})) = 0.1070,s(R({a_N}|{x_2})) = 0.0863,\\ & s(R({a_P}|{x_3})) = 0.1208,s(R({a_B}|{x_3})) = 0.1040,s(R({a_N}|{x_3})) = 0.1022,\\ & s(R({a_P}|{x_4})) = 0.1231,s(R({a_B}|{x_4})) = 0.1045,s(R({a_N}|{x_4})) = 0.0998,\\ & s(R({a_P}|{x_5})) = 0.1258,s(R({a_B}|{x_5})) = 0.1050,s(R({a_N}|{x_5})) = 0.0969,\\ & s(R({a_P}|{x_6})) = 0.1383,s(R({a_B}|{x_6})) = 0.1076,s(R({a_N}|{x_6})) = 0.0831. \end{aligned}$$

(v) The histogram of the score function is shown in Fig. 4:

Fig. 4
figure 4

Score function for \({O_{D{B_n}}}PTWD\)

The decision results are obtained as follows: \(BND(C) = \{ {x_1}\}, NEG(C) = \{ {x_2},{x_3},{x_4},{x_5},{x_6}\}\).

Customers in the 18-20, 21-23, 24-26, 27-29 and 30-32 age groups refused to buy the snacks, while customers in the 15-17 age group were uncertain about purchasing, so further investigation is needed.

5.4 Comparative studies

The proposed model is compared with two traditional models: the t-norm-based multi-granular support intuitionistic fuzzy rough set model and the t-norm-based multi-granular intuitionistic fuzzy rough set model. These traditional models were chosen due to their foundational status in fuzzy rough set theory and their widespread use in multi-attribute decision-making scenarios. The comparative analysis focuses on three key aspects: score function distribution, boundary domain and rejection domain, and conditional probability stability. In addition, computational efficiency and limitations are discussed to provide a more comprehensive evaluation.

5.4.1 Score function distribution

(i) Score function results and histograms with the t-norm-based multi-granular support intuitionistic fuzzy rough set model

Score function for the optimistic case:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1067,s(R({a_B}|{x_1})) = 0.1012,s(R({a_N}|{x_1})) = 0.0735,\\ & s(R({a_P}|{x_2})) = 0.1394,s(R({a_B}|{x_2})) = 0.1078,s(R({a_N}|{x_2})) = 0.0535,\\ & s(R({a_P}|{x_3})) = 0.1197,s(R({a_B}|{x_3})) = 0.1038,s(R({a_N}|{x_3})) = 0.1034,\\ & s(R({a_P}|{x_4})) = 0.1399,s(R({a_B}|{x_4})) = 0.1080,s(R({a_N}|{x_4})) = 0.0810,\\ & s(R({a_P}|{x_5})) = 0.1242,s(R({a_B}|{x_5})) = 0.1046,s(R({a_N}|{x_5})) = 0.0985,\\ & s(R({a_P}|{x_6})) = 0.1310,s(R({a_B}|{x_6})) = 0.1069,s(R({a_N}|{x_6})) = 0.0912. \end{aligned}$$

The histogram of the score function is shown in Fig. 5:

Fig. 5
figure 5

Score function of optimistic multi-granular support intuitionistic fuzzy rough set expectation loss

The optimistic multi-granular support intuitionistic fuzzy rough set decision results are as follows: \(NEG(C) = \{ {x_1},{x_2},{x_3},{x_4},{x_5},{x_6}\}\).

Customers in the age groups 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 refused to buy snacks.

Score function for the pessimistic case:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1176,s(R({a_B}|{x_1})) = 0.1048,s(R({a_N}|{x_1})) = 0.0978,\\ & s(R({a_P}|{x_2})) = 0.1432,s(R({a_B}|{x_2})) = 0.1060,s(R({a_N}|{x_2})) = 0.0892,\\ & s(R({a_P}|{x_3})) = 0.1235,s(R({a_B}|{x_3})) = 0.1046,s(R({a_N}|{x_3})) = 0.0993,\\ & s(R({a_P}|{x_4})) = 0.1260,s(R({a_B}|{x_4})) = 0.1050,s(R({a_N}|{x_4})) = 0.0967,\\ & s(R({a_P}|{x_5})) = 0.1255,s(R({a_B}|{x_5})) = 0.1050,s(R({a_N}|{x_5})) = 0.0977,\\ & s(R({a_P}|{x_6})) = 0.1390,s(R({a_B}|{x_6})) = 0.1078,s(R({a_N}|{x_6})) = 0.0823. \end{aligned}$$

The histogram of the score function is shown in Fig. 6:

Fig. 6 Score function of the pessimistic multi-granular support intuitionistic fuzzy rough set expected loss

Then the results are as follows: \(NEG(C) = \{ {x_1},{x_2},{x_3},{x_4},{x_5},{x_6}\}\).

Customers in the age groups 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 refused to buy snacks.

(ii) Score function results and histograms with the t-norm-based multi-granular intuitionistic fuzzy rough set model

Score function for the optimistic case:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1497,s(R({a_B}|{x_1})) = 0.1011,s(R({a_N}|{x_1})) = 0.2478,\\ & s(R({a_P}|{x_2})) = 0.2006,s(R({a_B}|{x_2})) = 0.1178,s(R({a_N}|{x_2})) = 0.1977,\\ & s(R({a_P}|{x_3})) = 0.1577,s(R({a_B}|{x_3})) = 0.1011,s(R({a_N}|{x_3})) = 0.2434,\\ & s(R({a_P}|{x_4})) = 0.1599,s(R({a_B}|{x_4})) = 0.1011,s(R({a_N}|{x_4})) = 0.2410,\\ & s(R({a_P}|{x_5})) = 0.1342,s(R({a_B}|{x_5})) = 0.1011,s(R({a_N}|{x_5})) = 0.2685,\\ & s(R({a_P}|{x_6})) = 0.1810,s(R({a_B}|{x_6})) = 0.1069,s(R({a_N}|{x_6})) = 0.2312. \end{aligned}$$

The histogram of the score function is shown in Fig. 7:

Fig. 7 Score function of the optimistic multi-granular intuitionistic fuzzy rough set expected loss

Then the optimistic multi-granular intuitionistic fuzzy rough set decision results are as follows: \(BND(C) = \{ {x_1},{x_2}, {x_3},{x_4},{x_5},{x_6}\}\).

Customers in the 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 age groups need more information or time to decide whether or not to buy this snack.

Score function for the pessimistic case:

$$\begin{aligned} & s(R({a_P}|{x_1})) = 0.1616,s(R({a_B}|{x_1})) = 0.1011,s(R({a_N}|{x_1})) = 0.2378,\\ & s(R({a_P}|{x_2})) = 0.1832,s(R({a_B}|{x_2})) = 0.1011,s(R({a_N}|{x_2})) = 0.2252,\\ & s(R({a_P}|{x_3})) = 0.1655,s(R({a_B}|{x_3})) = 0.1011,s(R({a_N}|{x_3})) = 0.2333,\\ & s(R({a_P}|{x_4})) = 0.1260,s(R({a_B}|{x_4})) = 0.1011,s(R({a_N}|{x_4})) = 0.2407,\\ & s(R({a_P}|{x_5})) = 0.1555,s(R({a_B}|{x_5})) = 0.1011,s(R({a_N}|{x_5})) = 0.2477,\\ & s(R({a_P}|{x_6})) = 0.1790,s(R({a_B}|{x_6})) = 0.1011,s(R({a_N}|{x_6})) = 0.2423. \end{aligned}$$

The histogram of the score function is shown in Fig. 8:

Fig. 8 Score function of the pessimistic multi-granular intuitionistic fuzzy rough set expected loss

The pessimistic multi-granular intuitionistic fuzzy rough set decision results are as follows: \(BND(C) = \{ {x_1},{x_2}, {x_3},{x_4},{x_5},{x_6}\}\).

Customers in the 15-17, 18-20, 21-23, 24-26, 27-29 and 30-32 age groups need more information or time to decide whether or not to buy this snack.

From Figs. 1, 2, 3, 4, 5, 6, 7, 8, it can be seen that the proposed model achieves a more concentrated score function distribution compared to the traditional models. This concentration leads to narrower decision boundary regions and improved decision accuracy, effectively reducing uncertainty in decision-making. By integrating overlap and grouping functions, the proposed model demonstrates greater robustness in handling conflicting information in multi-attribute decision scenarios. While the additional complexity introduced by overlap and grouping functions slightly increases computational time compared to traditional models, the overall efficiency remains competitive, ensuring practicality in real-world applications.
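One simple way to make the "more concentrated score distribution" claim concrete is to compare the dispersion of the score values printed in this section. The snippet below is purely illustrative: it uses only the pessimistic-case scores listed above, and the standard deviation is just one possible dispersion measure, not part of the paper's methodology.

```python
# Illustrative only: compare the spread of the pessimistic-case score values
# printed in this section. A smaller standard deviation corresponds to the
# "more concentrated" distribution attributed to the proposed model.
from statistics import pstdev

proposed_pessimistic = [0.1179, 0.1034, 0.1054, 0.1354, 0.1070, 0.0863,
                        0.1208, 0.1040, 0.1022, 0.1231, 0.1045, 0.0998,
                        0.1258, 0.1050, 0.0969, 0.1383, 0.1076, 0.0831]

tnorm_if_pessimistic = [0.1616, 0.1011, 0.2378, 0.1832, 0.1011, 0.2252,
                        0.1655, 0.1011, 0.2333, 0.1260, 0.1011, 0.2407,
                        0.1555, 0.1011, 0.2477, 0.1790, 0.1011, 0.2423]

print(f"proposed model score spread:          {pstdev(proposed_pessimistic):.4f}")
print(f"t-norm IF rough set score spread:     {pstdev(tnorm_if_pessimistic):.4f}")
```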

5.4.2 Boundary domain and rejection domain

Fig. 9 Analysis of decision results for the boundary domain as well as the rejection domain

Fig. 9 shows that the proposed model reduces the size of the boundary domain in both the optimistic and pessimistic cases, refining the decision regions and mitigating uncertainty, thereby enhancing precision. The expanded rejection domain further improves the handling of ambiguous or conflicting information, demonstrating the model's adaptability to complex scenarios. While the overlap functions introduce computational overhead, the model maintains a balance between precision and scalability, ensuring practical feasibility.

5.4.3 Conditional probability stability

Fig. 10 Analysis of decision results for conditional probability

Fig. 10 illustrates that the proposed model achieves a more stable and concentrated conditional probability distribution than the traditional models. This improvement ensures higher consistency and reliability in decision outcomes, particularly in scenarios characterized by overlapping attributes or multi-attribute dependencies. Although this process increases computational demands, the results show that the trade-off between accuracy and computational efficiency is favorable, particularly in scenarios where precision and reliability are critical.

The proposed model demonstrates significant improvements in decision accuracy, robustness, and adaptability compared to traditional models. However, it is not without limitations. The integration of overlap and grouping functions introduces additional computational complexity, which may pose challenges in large-scale or resource-constrained environments. Furthermore, the performance of the model is sensitive to the selection of overlap and grouping functions, requiring careful parameter tuning to achieve optimal results. Future work will focus on optimizing these parameters and exploring methods to further reduce computational costs while maintaining performance.

Through these comparative analyses, the proposed model consistently outperforms traditional models in key areas such as decision boundary precision, adaptability to complex scenarios, and decision reliability. While introducing a manageable computational overhead, the integration of overlap and grouping functions significantly enhances the model’s capability to address the limitations of traditional fuzzy rough set models. These findings demonstrate the practical applicability of the proposed model in diverse multi-attribute decision-making contexts.

6 Conclusion and future work

This paper aims to develop a new three-way decision model based on multi-granular support intuitionistic fuzzy rough sets using n-dimensional overlap functions and n-dimensional grouping functions, thereby effectively addressing complex multi-attribute decision-making problems. The main results are as follows:

  (i) Construction of the multi-granular support intuitionistic fuzzy rough set model: Utilizing n-dimensional overlap functions and n-dimensional grouping functions, optimistic and pessimistic multi-granular support intuitionistic fuzzy rough set models based on overlap functions have been established. These models not only retain the core properties of classical multi-granular support intuitionistic fuzzy rough sets but also introduce several new properties. For example, the inclusion \(\underline{O{M_{G\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}}) \subseteq \underline{O{M_{\sum \limits _{i = 1}^m {{{\tilde{\Re }}_i}} }}} ({\tilde{A}})\) holds when the overlap functions and grouping functions are idempotent; however, this relationship no longer holds when the overlap function O is a 1-distribution expansion and the grouping function G is 0-distribution tight.

  (ii) Optimization of the three-way decision models: Based on the optimistic and pessimistic multi-granular support intuitionistic fuzzy rough sets, this paper presents two novel three-way decision models. Specifically, these models use overlap functions to establish optimistic and pessimistic three-way decision models within the framework of support intuitionistic fuzzy sets.

  (iii) Empirical research on consumer decision-making problems: By designing a consumer evaluation ranking algorithm and applying it in an empirical analysis, this study demonstrates that the established three-way decision models can effectively narrow the boundary region of the decision-making model.

  (iv) Research limitations: Existing fuzzy rough set models have limitations in handling multi-granular perspectives and overlap factors, reducing their effectiveness in complex decision-making. The proposed model addresses these issues by incorporating n-dimensional overlap and grouping functions but still faces challenges. Its computational complexity may hinder real-time applications, and parameter tuning affects performance consistency. Furthermore, this study focuses on consumer decision-making, leaving its applications in other fields unexplored.

  (v) Future research directions: Future work can focus on optimizing the model by developing more efficient algorithms to reduce complexity and automating parameter tuning for greater robustness. Expanding its use to fields such as healthcare, where decisions balance cost and effectiveness, or environmental management, where sustainability and resource allocation are key, could demonstrate its versatility. Integrating machine learning could further enhance adaptability and scalability for large-scale dynamic decision-making.

In summary, this paper proposes a novel three-way decision model based on multi-granular support intuitionistic fuzzy rough sets using n-dimensional overlap and grouping functions, effectively addressing complex multi-attribute decision-making problems. The proposed models reduce decision uncertainty and narrow boundary regions while providing theoretical advancements and practical tools for solving real-world decision-making problems under uncertainty.