
Case-based decision model matches ideal point model: Application to marketing decision support system

Published in the Journal of Intelligent Information Systems

Abstract

This paper studies the relationship between case-based decision theory (CBDT) and the ideal point model (IPM). We show that a case-based decision model (CBDM) can be transformed into an IPM under certain assumptions. This transformation allows us to visualize the relationships among data and simplifies the distance calculations: only the distance between the current datum and the ideal point is required, rather than the distances between all pairs of data. Our results will assist researchers with product design analysis and the positioning of goods through CBDT, by revealing dependence on past cases or by providing a reference point. Furthermore, to check whether the similarity function presented in the theoretical part is valid for empirical analysis, we use data on the viewing behavior of audiences of TV dramas in Japan and compare the estimation results of the CBDM, which corresponds to a standard decision model with similarities, against models with various other similarity functions and against a model without a similarity function. Our empirical analysis shows that the CBDM with the similarity function presented in this study best fits the data.



Notes

  1. For pioneering studies on CBR, see Schank (1982) and Kolodner (1984); for more recent work, see Aha et al. (2005). Many researchers have developed expert systems with CBR. An insightful survey by Sørmo et al. (2005) gives an overview of different theories of explanation in CBR, classifies explanation in CBR into nine types, and offers applications such as displaying similar cases, visualization, and explanation models, among others.

  2. According to Gilboa and Schmeidler (1995), “the concept of similarity is the main engine of the decision models.” For more details, see Quine (1969), Tversky (1977), Gick and Holyoak (1980, 1983), and Falkenhainer et al. (1989).

  3. Note that there could exist a formulation with linear or monotonically increasing preferences. Kamakura and Srivastava (1986) state that linear preferences would be best represented by infinite ideal points (Carroll 1972). Eiselt (2011) notes, as an example, that one problem associated with the IPM is the existence of features, such as price and gas consumption, whose ideal point is negative infinity (or zero).

  4. Since the distance and constant terms are both multiplied by n, setting s(x, x_i) = −α(x − x_i)^2 + β means that the total utility level increases as the number of cases increases. For this reason, we refer to the relative effect rather than the absolute effect. If we set s(x, x_i; n) = −α(x − x_i)^2/n + β/n, increasing the number of cases affects only the nβ term.

  5. If the level of utility is a real number, it can be transformed into u_i ∈ (0, 1] using a logistic function, for example. If a non-negative utility is required, the computation (current value − minimum value) / (maximum value − minimum value) yields a value between 0 and 1.
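The two normalizations in this note can be written as small helpers. This is an illustrative sketch, not code from the paper; the logistic map yields values in the open interval (0, 1), and min-max rescaling assumes at least two distinct values.

```python
import math

def logistic(u):
    """Map a real-valued utility into (0, 1) via the logistic function."""
    return 1.0 / (1.0 + math.exp(-u))

def min_max(values):
    """Rescale utilities to [0, 1] as (v - min) / (max - min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

print(min_max([2.0, 4.0, 6.0]))  # [0.0, 0.5, 1.0]
print(round(logistic(0.0), 2))   # 0.5
```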

  6. For more details on jCOLIBRI, see the website http://gaia.fdi.ucm.es/research/colibri/people-users-apps/users-opinion (retrieved on December 22, 2016) and Díaz-Agudo et al. (2007).

  7. See Brooke (1996).

References

  • Aha, D.W., McSherry, D., & Yang, Q. (2005). Advances in conversational case-based reasoning, special issue. The Knowledge Engineering Review, 20(3), 247–254.

  • Azrieli, Y. (2011). Axioms for Euclidean preferences with a valence dimension. Journal of Mathematical Economics, 47(4–5), 545–553.

  • Brooke, J. (1996). SUS: a quick and dirty usability scale. Usability Evaluation in Industry, 189(194), 4–7.

  • Carroll, J.D. (1972). Individual differences and multidimensional analysis of preferential choice. In Multidimensional scaling: theory and applications in the behavioral sciences, Vol. 1. New York: Seminar Press.

  • Changchien, S.W., & Lin, M.C. (2005). Design and implementation of a case-based reasoning system for marketing plans. Expert Systems with Applications, 28(1), 43–53.

  • Coombs, C.H. (1950). Psychological scaling without a unit of measurement. Psychological Review, 57(3), 145.

  • Díaz-Agudo, B., González-Calero, P.A., Recio-García, J.A., & Sánchez-Ruiz-Granados, A.A. (2007). Building CBR systems with jCOLIBRI. Science of Computer Programming, 69(1), 68–75.

  • Dong, R., O'Mahony, M.P., Schaal, M., McCarthy, K., & Smyth, B. (2016). Combining similarity and sentiment in opinion mining for product recommendation. Journal of Intelligent Information Systems, 46(2), 285–312.

  • Eiselt, H.A. (2011). Equilibria in competitive location models. In Foundations of location analysis (pp. 139–162). Springer US.

  • Elrod, T. (1988). Choice map: inferring a product-market map from panel data. Marketing Science, 7(1), 21–40.

  • Elrod, T. (1991). Internal analysis of market structure: recent developments and future prospects. Marketing Letters, 2(3), 253–266.

  • Falkenhainer, B., Forbus, K.D., & Gentner, D. (1989). The structure-mapping engine: algorithm and examples. Artificial Intelligence, 41, 1–63.

  • Freyne, J., & Smyth, B. (2009). Creating visualizations: a case-based reasoning perspective. In Irish Conference on Artificial Intelligence and Cognitive Science (pp. 82–91). Berlin Heidelberg: Springer.

  • Gayer, G., Gilboa, I., & Lieberman, O. (2007). Rule-based and case-based reasoning in housing prices. The B.E. Journal of Theoretical Economics, 7(1), 1–37.

  • Gick, M.L., & Holyoak, K.J. (1980). Analogical problem solving. Cognitive Psychology, 12(3), 306–355.

  • Gick, M.L., & Holyoak, K.J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15(1), 1–38.

  • Gilboa, I., Lieberman, O., & Schmeidler, D. (2006). Empirical similarity. The Review of Economics and Statistics, 88(3), 433–444.

  • Gilboa, I., & Schmeidler, D. (1995). Case-based decision theory. The Quarterly Journal of Economics, 110(3), 605–639.

  • Gilboa, I., & Schmeidler, D. (2001). A theory of case-based decisions. Cambridge: Cambridge University Press.

  • Grosskopf, B., Sarin, R., & Watson, E. (2015). An experiment on case-based decision making. Theory and Decision, 79(4), 639–666.

  • Guerdjikova, A. (2008). Case-based learning with different similarity functions. Games and Economic Behavior, 63(1), 107–132.

  • Horsman, G., Laing, C., & Vickers, P. (2012). A case-based reasoning method for locating evidence during digital forensic device triage. Decision Support Systems, 61, 682–689.

  • Hotelling, H. (1929). Stability in competition. Economic Journal, 39(153), 41–57.

  • Jahnke, H., Chwolka, A., & Simons, D. (2005). Coordinating service-sensitive demand and capacity by adaptive decision making: an application of case-based decision theory. Decision Sciences, 36(1), 1–32.

  • Kamakura, W.A., & Srivastava, R.K. (1986). An ideal-point probabilistic choice model for heterogeneous preferences. Marketing Science, 5(3), 199–218.

  • Kar, D., Kumar, A., Chakraborti, S., & Ravindran, B. (2013). iCaseViz: learning case similarities through interaction with a case base visualizer. In International Conference on Case-Based Reasoning (pp. 203–217). Berlin Heidelberg: Springer.

  • Kinjo, K., & Ebina, T. (2015). State-dependent choice model for TV programs with externality: analysis of viewing behavior. Journal of Media Economics, 28(1), 20–40.

  • Kinjo, K., & Sugawara, S. (2016). An empirical analysis for a case-based decision to watch Japanese TV dramas. The B.E. Journal of Theoretical Economics, 16(2), 679–709.

  • Kolodner, J. (1984). Retrieval and organization strategies in conceptual memory: a computer model. Hillsdale: Lawrence Erlbaum Associates.

  • Kolodner, J. (2014). Case-based reasoning. Morgan Kaufmann.

  • Krause, A. (2009). Learning and herding using case-based decisions with local interactions. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 39(3), 662–669.

  • Lee, J.K., Sudhir, K., & Steckel, J.H. (2002). A multiple ideal point model: capturing multiple preference effects from within an ideal point framework. Journal of Marketing Research, 39(1), 73–86.

  • Li, H., & Sun, J. (2012). Case-based reasoning ensemble and business application: a computational approach from multiple case representations driven by randomness. Expert Systems with Applications, 39(3), 3298–3310.

  • Li, S., Sun, B., & Wilcox, R.T. (2005). Cross-selling sequentially ordered products: an application to consumer banking services. Journal of Marketing Research, 42(2), 233–239.

  • Lovallo, D., Clarke, C., & Camerer, C. (2012). Robust analogizing and the outside view: two empirical tests of case-based decision making. Strategic Management Journal, 33(5), 496–512.

  • Marling, C., Montani, S., Bichindaritz, I., & Funk, P. (2014). Synergistic case-based reasoning in medical domains. Expert Systems with Applications, 41(2), 249–259.

  • McKenna, E., & Smyth, B. (2001). An interactive visualisation tool for case-based reasoners. Applied Intelligence, 14(1), 95–114.

  • Meyer, R., Erdem, T., Feinberg, F., Gilboa, I., Hutchinson, W., Krishna, A., Lippman, S., Mela, C., Pazgal, A., Prelec, D., & Steckel, J. (1997). Dynamic influences on individual choice behavior. Marketing Letters, 8(3), 349–360.

  • Namee, B.M., & Delany, S.J. (2010). CBTV: visualising case bases for similarity measure design and selection. In International Conference on Case-Based Reasoning (pp. 213–227). Berlin Heidelberg: Springer.

  • Ossadnik, W., Wilmsmann, D., & Niemann, B. (2013). Experimental evidence on case-based decision theory. Theory and Decision, 75(2), 211–232.

  • Quine, W.V. (1969). Natural kinds. In Essays in honor of Carl G. Hempel (pp. 5–23). Netherlands: Springer.

  • Recio-García, J.A., Díaz-Agudo, B., & González-Calero, P.A. (2009). Boosting the performance of CBR applications with jCOLIBRI. In IEEE International Conference on Tools with Artificial Intelligence (pp. 276–283).

  • Recio-García, J.A., González-Calero, P.A., & Díaz-Agudo, B. (2014). jcolibri2: a framework for building case-based reasoning systems. Science of Computer Programming, 79, 126–145.

  • Rossi, P.E., Allenby, G.M., & McCulloch, R. (2005). Bayesian statistics and marketing. Chichester: Wiley.

  • Schank, R.C. (1982). Dynamic memory: a theory of reminding and learning in computers and people. Cambridge: Cambridge University Press.

  • Simon, H.A. (1957). Models of man. New York: Wiley.

  • Smyth, B., Mullins, M., & McKenna, E. (2000). Picture perfect: visualisation techniques for case-based reasoning. In ECAI (pp. 65–72).

  • Sørmo, F., Cassens, J., & Aamodt, A. (2005). Explanation in case-based reasoning: perspectives and goals. Artificial Intelligence Review, 24(2), 109–143.

  • Tversky, A. (1977). Features of similarity. Psychological Review, 84(4), 327–352.

  • Urban, G.L. (1975). PERCEPTOR: a model for product positioning. Management Science, 21(8), 858–871.

Author information

Correspondence to Keita Kinjo.

Appendices

Appendix A

A.1 Proof of Proposition 1

If u_i = 1 for all i, we have the following:

$$\begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} s(x,x_{i}) &=& f[(x-x_{1})^{2}] + f[(x-x_{2})^{2}] + \cdots + f[(x-x_{n})^{2}] + n\beta = f\left[ \sum\limits_{i=1}^{n} (x-x_{i})^{2} \right] + n\beta \\ &=& f\left[ nx^{2} - 2x \sum\limits_{i=1}^{n} x_{i} + \sum\limits_{i=1}^{n} x_{i}^{2} \right] + n\beta = n f\left[ x^{2} - \frac{2{\sum}_{i=1}^{n} x_{i}}{n} x + \frac{{\sum}_{i=1}^{n} x_{i}^{2}}{n} \right] + n\beta \\ &=& nf\left[ \left( x - \frac{{\sum}_{i=1}^{n} x_{i}}{n}\right)^{2} \right] + nf\left[ \frac{{\sum}_{i=1}^{n} x_{i}^{2}}{n} - \left( \frac{{\sum}_{i=1}^{n} x_{i}}{n}\right)^{2} \right] + n\beta \end{array} $$

The last expression describes an ideal point model with ideal point \({\sum }_{i=1}^{n} x_{i}/n\), the mean of the past cases.
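The identity above can be checked numerically for a linear kernel. The sketch below takes f(z) = −αz with hypothetical values for α, β, the case base, and the evaluation point x, and verifies that the total similarity equals the ideal point form, with the ideal point at the mean of the past cases.

```python
# Numerical check of Proposition 1 under the assumption f(z) = -alpha * z.
# All numbers below are hypothetical.
alpha, beta = 0.8, 1.5
cases = [1.0, 2.0, 4.0, 7.0]
n = len(cases)
x = 3.2

f = lambda z: -alpha * z  # linear similarity kernel

# Left side: total similarity of x to the past cases.
lhs = sum(f((x - xi) ** 2) + beta for xi in cases)

# Right side: the ideal point form, with the ideal point at the case mean.
xbar = sum(cases) / n
variance_term = sum(xi ** 2 for xi in cases) / n - xbar ** 2
rhs = n * f((x - xbar) ** 2) + n * f(variance_term) + n * beta

assert abs(lhs - rhs) < 1e-9
print(xbar)  # the ideal point: 3.5
```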

A.2 Proof of Lemma 1

Let a and b denote a pair of vectors in \(\mathbb{R}^{n}\), equipped with the standard inner product. By the Cauchy–Schwarz inequality, which holds strictly unless a and b are linearly dependent, we have

$$\left( \sum\limits_{i=1}^{n} a_{i} b_{i} \right)^{2} <\left( \sum\limits_{i=1}^{n} {a_{i}^{2}} \right) \left( \sum\limits_{i=1}^{n} {b_{i}^{2}} \right) $$

Setting a_i = x_i and b_i = 1/n, we obtain

$$\left( \sum\limits_{i=1}^{n} x_{i} \frac{1}{n} \right)^{2} < \sum\limits_{i=1}^{n} {x_{i}^{2}} \sum\limits_{i=1}^{n} \frac{1}{n^{2}} \Leftrightarrow \frac{({\sum}_{i=1}^{n} x_{i} )^{2}}{n^{2}} <\frac{{\sum}_{i=1}^{n} {x_{i}^{2}} }{n} \Leftrightarrow \frac{{\sum}_{i=1}^{n} {x_{i}^{2}} }{n}-\frac{({\sum}_{i=1}^{n} x_{i} )^{2}}{n^{2}} >0, $$

which implies that the argument of the function f in the constant term is positive.

A.3 Proof of Proposition 2

We first consider the argument of the similarity function f. When x_{n+1} is added, the argument changes by the following amount:

$$\frac{{\sum}_{i=1}^{n} {x_{i}^{2}} +x_{n+1}^{2}}{n+1} -\left( \frac{{\sum}_{i=1}^{n} x_{i} +x_{n+1}}{n+1}\right)^{2} -\left[ \frac{{\sum}_{i=1}^{n} {x_{i}^{2}} }{n} -\left( \frac{{\sum}_{i=1}^{n} x_{i} }{n} \right)^{2} \right] $$

Setting \(a^{2}={\sum }_{i=1}^{n} {x_{i}^{2}} (\ge 0)\) and \(b={\sum }_{i=1}^{n} x_{i} (\ge 0)\), this equation can be rewritten as follows:

$$\begin{array}{@{}rcl@{}} \frac{a^{2} +x_{n+1}^{2}}{n+1} &-& \left( \frac{b+x_{n+1}}{n+1}\right)^{2} -\left[ \frac{a^{2}}{n}-\left( \frac{b}{n} \right)^{2} \right] = \frac{a^{2} +x_{n+1}^{2}}{n+1}-\frac{a^{2}}{n} + \left[ -\left( \frac{b+x_{n+1}}{n+1} \right)^{2}+\left( \frac{b}{n}\right)^{2} \right] \\ &=& \frac{n(a^{2} +x_{n+1}^{2}) -(n+1)a^{2}}{n(n+1)} +\frac{(n+1)^{2} b^{2} -n^{2} (b+x_{n+1})^{2}}{n^{2} (n+1)^{2} } \\ &=& \frac{-a^{2} n(1+n)+b^{2} (1+n)+b^{2} n-2bn^{2} x_{n+1}+n^{3} x_{n+1}^{2}}{n^{2} (n+1)^{2} } \\ &=& \frac{-a^{2} n(1+n)+b^{2} (1+n)+b^{2} n+n^{3} \left( x_{n+1}-b/n\right)^{2} -b^{2} n}{n^{2} (n+1)^{2} } \\ &= &\frac{n^{3} \left( x_{n+1}-b/n\right)^{2} -(1+n)(na^{2}-b^{2})}{n^{2} (n+1)^{2} }. \end{array} $$

A sufficient condition for this expression to be positive is

$$n^{3} \left( x_{n+1}-b/n\right)^{2} -(1+n)(na^{2}-b^{2})\ge 0. $$

Conversely, a sufficient condition for it to be negative is

$$n^{3} \left( x_{n+1}-b/n\right)^{2} -(1+n)(na^{2}-b^{2})\le 0. $$

The first inequality can be rewritten as follows, where the square root is real because na^2 − b^2 > 0 by Lemma 1:

$$\begin{array}{@{}rcl@{}} &&n^{3} \left( x_{n+1}-b/n\right)^{2} -(1+n)(na^{2}-b^{2})\ge 0 \\ &\Leftrightarrow & \ x_{n+1} \le \frac{b}{n}-\sqrt{ \frac{(1+n)(na^{2}-b^{2}) }{n^{3}} } \ \text{or} \ x_{n+1} \ge \frac{b}{n} +\sqrt{ \frac{(1+n)(na^{2}-b^{2}) }{n^{3}} }. \end{array} $$

If the above condition is satisfied, the constant term is decreasing, because the argument of f is increasing and f is a decreasing function. On the contrary, if the following condition is satisfied, the argument of f is decreasing and the constant term is increasing:

$$\begin{array}{@{}rcl@{}} &&n^{3} \left( x_{n+1}-b/n\right)^{2} -(1+n)(na^{2}-b^{2})\le 0 \\ &\Leftrightarrow & \ \frac{b}{n}-\sqrt{ \frac{(1+n)(na^{2}-b^{2}) }{n^{3}} } \le x_{n+1} \le \frac{b}{n} +\sqrt{ \frac{(1+n)(na^{2}-b^{2}) }{n^{3}} } . \end{array} $$

Note that the lower bound may be negative and the upper bound may exceed one. Thus, we impose two additional operators, min{…, 1} and max{…, 0}, so that the bounds are restrained within the unit interval.

When x_{n+1} is added, the change in the ideal point is

$$\frac{{\sum}_{i=1}^{n} x_{i}+x_{n+1}}{n+1} - \frac{{\sum}_{i=1}^{n} x_{i}}{n} = \frac{n{\sum}_{i=1}^{n} x_{i} +nx_{n+1} - (n+1) {\sum}_{i=1}^{n} x_{i} }{n(n+1) } = \frac{x_{n+1} -{\sum}_{i=1}^{n} x_{i} /n}{n+1} $$

If \({\sum }_{i=1}^{n} x_{i} /n\le x_{n+1}\), then \([ x_{n+1} -({\sum }_{i=1}^{n} x_{i} )/n] /(n+1)\ge 0\), so the ideal point is increasing. On the other hand, if \(x_{n+1} \le {\sum }_{i=1}^{n} x_{i} /n\), we have \([ x_{n+1} -({\sum }_{i=1}^{n} x_{i} )/n] /(n+1)\le 0\), and the ideal point is decreasing.
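The last identity can be illustrated numerically. The case base below is hypothetical; the sketch only checks that adding a case x_{n+1} shifts the ideal point (the case mean) by (x_{n+1} − Σx_i/n)/(n+1), in the direction of the new case.

```python
# Hypothetical case base; mirrors the update identity in the proof.
cases = [1.0, 2.0, 4.0, 7.0]
n = len(cases)
old_ideal = sum(cases) / n  # ideal point before the new case

x_new = 10.0  # new case above the old mean
new_ideal = (sum(cases) + x_new) / (n + 1)

# The shift equals (x_{n+1} - sum(x_i)/n) / (n + 1).
shift = (x_new - old_ideal) / (n + 1)
assert abs((new_ideal - old_ideal) - shift) < 1e-12
assert new_ideal > old_ideal  # a case above the mean pulls the ideal point up
```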

A.4 Proof of Proposition 3

From the assumptions of the similarity function, we have

$$\begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} S(X,X_{i}) &=& \sum\limits_{i=1}^{n} \left[ \sum\limits_{j=1}^{m} f_{j}[(x_{j}-x_{i,j})^{2}] + \sum\limits_{j=1}^{m} \beta_{j} \right] = \sum\limits_{j=1}^{m} \left[ \sum\limits_{i=1}^{n} f_{j}[(x_{j}-x_{i,j})^{2}] \right] + n\left( \sum\limits_{j=1}^{m} \beta_{j} \right) \\ &=& \sum\limits_{j=1}^{m} \left\{ n f_{j}\left[ \left( x_{j}-\frac{{\sum}_{i=1}^{n} x_{j,i}}{n}\right)^{2} \right] + n f_{j}\left[ \frac{{\sum}_{i=1}^{n} x_{j,i}^{2}}{n} - \left( \frac{{\sum}_{i=1}^{n} x_{j,i}}{n}\right)^{2} \right] \right\} + n\sum\limits_{j=1}^{m} \beta_{j} \\ &=& n \sum\limits_{j=1}^{m} f_{j}\left[ \left( x_{j}-\frac{{\sum}_{i=1}^{n} x_{j,i}}{n}\right)^{2} \right] + n \sum\limits_{j=1}^{m} f_{j}\left[ \frac{{\sum}_{i=1}^{n} x_{j,i}^{2}}{n} - \left( \frac{{\sum}_{i=1}^{n} x_{j,i}}{n}\right)^{2} \right] + n\sum\limits_{j=1}^{m} \beta_{j} \end{array} $$

which expresses an ideal point model with the m-dimensional ideal point \(({\sum }_{i=1}^{n} x_{1,i}/n, \ldots , {\sum }_{i=1}^{n} x_{m,i}/n)\), that is, the attribute-wise means of the past cases.

A.5 Proof of Proposition 4

The following calculation gives

$$\begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} S(X,X_{i})u_{i} &=& \sum\limits_{i=1}^{n} \left\{ \left( \sum\limits_{j=1}^{m} f_{j}[(x_{j}-x_{i,j})^{2}] + \sum\limits_{j=1}^{m} \beta_{j} \right) u_{i} \right\} = \sum\limits_{j=1}^{m} \left\{ \sum\limits_{i=1}^{n} f_{j}[(x_{j}-x_{i,j})^{2}] u_{i} \right\} + \sum\limits_{i=1}^{n} u_{i} \sum\limits_{j=1}^{m} \beta_{j} \\ &=& n \sum\limits_{j=1}^{m} \left\{ f_{j}\left[ \left( x_{j}-\frac{{\sum}_{i=1}^{n} x_{j,i} u_{i}}{n{\sum}_{i=1}^{n} u_{i}}\right)^{2} \right] + f_{j}\left[ \frac{{\sum}_{i=1}^{n} x_{j,i}^{2} u_{i}}{n} - \left( \frac{{\sum}_{i=1}^{n} x_{j,i} u_{i}}{n{\sum}_{i=1}^{n} u_{i}}\right)^{2} \right] \right\} + \sum\limits_{i=1}^{n} u_{i} \sum\limits_{j=1}^{m} \beta_{j} \\ &=& n \sum\limits_{j=1}^{m} f_{j}\left[ \left( x_{j}-\frac{{\sum}_{i=1}^{n} x_{j,i} u_{i}}{n{\sum}_{i=1}^{n} u_{i}}\right)^{2} \right] + n \sum\limits_{j=1}^{m} f_{j}\left[ \frac{{\sum}_{i=1}^{n} x_{j,i}^{2} u_{i}}{n} - \left( \frac{{\sum}_{i=1}^{n} x_{j,i} u_{i}}{n{\sum}_{i=1}^{n} u_{i}}\right)^{2} \right] + \sum\limits_{i=1}^{n} u_{i} \sum\limits_{j=1}^{m} \beta_{j} \end{array} $$

One can verify that the above equation corresponds to the ideal point model with the m-dimensional ideal point \(([{\sum }_{i=1}^{n} x_{1,i} u_{i} ]/[n {\sum }_{i=1}^{n} u_{i}] ,\ldots , [{\sum }_{i=1}^{n} x_{m,i} u_{i}]/[n{\sum }_{i=1}^{n} u_{i}])\).

Appendix B

B.1 Discussion on the similarity function

The weight of each attribute in the similarity function can be either positive or negative. To handle both signs, let K^+ and K^− denote the sets of attributes with k_{h,j} ≥ 0 and k_{h,j} < 0, respectively, among the m attributes. Splitting the sum over these two sets, we have

$$S_{h} (X,X_{i} )=\sum\limits_{j=1}^{m} k_{h,j} s_{j} (x_{j},x_{i,j} ) I_{K^{+}} (k)+\sum\limits_{j=1}^{m} k_{h,j} s_{j} (x_{j},x_{i,j} ) I_{K^{-}} (k), $$

where \(I_{K^{+}} (k)\) and \(I_{K^{-}} (k)\) denote the indicator functions of the positive- and negative-weight sets, respectively. In the first term \({\sum }_{j=1}^{m} k_{h,j} s_{j} (x_{j},x_{i,j} ) I_{K^{+}} (k)\), each k_{h,j} s_j(x_j, x_{i,j}) ≥ 0, so the term is itself one kind of similarity function and the attributes with k_{h,j} ≥ 0 can be transformed into an ideal point. For the second term \({\sum }_{j=1}^{m} k_{h,j} s_{j} (x_{j},x_{i,j} ) I_{K^{-}} (k)\), rewriting it as \(-{\sum }_{j=1}^{m} (-k_{h,j})s_{j} (x_{j},x_{i,j} ) I_{K^{-}} (k)\), each (−k_{h,j}) s_j(x_j, x_{i,j}) > 0 is again one kind of similarity function, so the attributes with k_{h,j} < 0 can also be transformed into another ideal point.
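A minimal sketch of the decomposition above, with hypothetical weights k_{h,j} and per-attribute similarity values: the weighted similarity splits exactly into a positive-weight part and a negative-weight part, each of which can then be treated as its own similarity function.

```python
# Hypothetical weights k_{h,j} and per-attribute similarity values
# s_j(x_j, x_{i,j}) for a single pair of cases.
k = [0.5, -0.3, 1.2, -0.7]
s = [0.9, 0.4, 0.6, 0.8]

# Split the weighted similarity over K+ (k_j >= 0) and K- (k_j < 0),
# as in the indicator-function decomposition.
positive_part = sum(kj * sj for kj, sj in zip(k, s) if kj >= 0)
negative_part = sum(kj * sj for kj, sj in zip(k, s) if kj < 0)

total = sum(kj * sj for kj, sj in zip(k, s))
assert abs(total - (positive_part + negative_part)) < 1e-12
print(positive_part, negative_part)
```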

B.2 Our estimation method

Our parameters are estimated with the hierarchical Bayes method of Rossi et al. (2005); see that reference for details. Given data y, the joint posterior distribution is given by

$$ p(k_{h},{\Delta}, V_{k}| y)\propto p({\Delta},V_{k}){\Pi}_{h=1}^{H} \{ p(k_{h} |{\Delta}, V_{k},z_{h} )p(y_{h} |k_{h},x_{h})\}. $$
(3)

Thus, the conditional posterior distribution p(k_h | Δ, V_k, z_h, y_h, x_h) is

$$p(k_{h}| {\Delta}, V_{k},z_{h},y_{h},x_{h})\propto p(y_{h}| k_{h},x_{h})\, p(k_{h} |{\Delta}, V_{k},z_{h} ). $$

However, since this conditional posterior cannot be sampled from in closed form, sampling is conducted with a random-walk algorithm. Let \({k^{0}_{h}}\) denote an initial value. A candidate is then drawn as \({k_{h}^{t}}=k_{h}^{t-1}+\varepsilon\), where t > 0 is the iteration index and ε follows a normal distribution. If a uniform random number drawn from [0, 1] falls below the acceptance rate (4), the candidate is accepted as the next \({k_{h}^{t}}\); otherwise \({k_{h}^{t}}=k_{h}^{t-1}\).

$$ \alpha (k_{h}^{t-1}; {k_{h}^{t}})=\min \{ p({k_{h}^{t}} |{\Delta} ,V_{k},z_{h},y_{h},x_{h})/ p(k_{h}^{t-1}| {\Delta} , V_{k}, z_{h},y_{h},x_{h} ),1 \} $$
(4)

The conditional distribution of Δ can be sampled as follows:

$$\begin{array}{@{}rcl@{}} vec({\Delta}) &\sim& N_{kq}\left( vec(\tilde{D}),\ V_{k} \otimes (Z^{\prime} Z+A_{d})^{-1} \right) \\ \tilde{D} &=& (Z^{\prime} Z+A_{d})^{-1} \left( Z^{\prime} Z\,(Z^{\prime} Z)^{-1} Z^{\prime}K+A_{d}\bar{\Delta} \right), \end{array} $$

where vec(·) stacks the columns of a matrix into a vector, Z is the matrix whose rows are the attributes z_h, and K is the matrix whose rows are the k_h. \(\bar{\Delta}\) and A_d come from the normal prior distribution assumed for Δ.

The conditional distribution of V_k is sampled from an inverse Wishart distribution:

$$\begin{array}{@{}rcl@{}} V_{k} &\sim& IW(f_{0}+H,\ V_{0}+V^{\prime}) \\ V^{\prime} &=& \sum\limits_{h=1}^{H} (k_{h}-{\Delta}^{\prime} z_{h})(k_{h}-{\Delta}^{\prime} z_{h})^{\prime}, \end{array} $$

where f_0 and V_0 are the prior parameters of the inverse Wishart distribution.
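The random-walk sampling step for k_h can be sketched as follows. This is not the paper's implementation: the target log posterior is a hypothetical stand-in (a normal density centred at 2) and the proposal scale is arbitrary; only the accept/reject rule mirrors the acceptance rate (4).

```python
import math
import random

random.seed(0)

def log_posterior(k):
    # Hypothetical stand-in for log p(k_h | Delta, V_k, z_h, y_h, x_h):
    # a normal log density (up to a constant) centred at 2.
    return -0.5 * (k - 2.0) ** 2

def random_walk_metropolis(k0, steps=20000, scale=1.0):
    """Random walk: propose k^t = k^{t-1} + eps, accept via rate (4)."""
    k, draws = k0, []
    for _ in range(steps):
        proposal = k + random.gauss(0.0, scale)
        log_accept = log_posterior(proposal) - log_posterior(k)
        if random.random() < math.exp(min(log_accept, 0.0)):
            k = proposal
        draws.append(k)
    return draws

draws = random_walk_metropolis(0.0)
posterior_mean = sum(draws[5000:]) / len(draws[5000:])  # discard burn-in
print(posterior_mean)  # should be close to 2, the mode of the stand-in target
```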


Cite this article

Kinjo, K., Ebina, T. Case-based decision model matches ideal point model: application to marketing decision support system. J Intell Inf Syst 50, 341–362 (2018). https://doi.org/10.1007/s10844-017-0463-6
