An Automatic Modeling Method of Kansei Evaluation from Product Data Using a CNN Model Expressing the Relationship Between Impressions and Physical Features

  • Conference paper
  • First Online:
HCI International 2019 - Posters (HCII 2019)

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1032))

Abstract

In the field of Kansei engineering, a common approach is Kansei evaluation modeling, which expresses the relationships between the physical features of an object and the impressions it evokes. In the conventional modeling method, however, personnel and time costs are very high, because multiple experiments and analyses are required to build a high-precision model. Machine learning has been studied as an alternative way to model the relationship between the physical features of products and their impressions, but no study has been reported that takes into account the fact that impression evaluations vary from person to person. In this study, we address automatic Kansei evaluation modeling using product images and review-text data available on the web. A convolutional neural network (CNN) is used for modeling, and the variation in each product's impressions is taken into consideration during learning. The proposed method consists of the following steps: (1) extracting the main impressions of the target domain and calculating values that express the strength of each impression from review-text data through text mining, based on a previous study [1]; (2) creating a product-image data set that uses the distribution of each product's impression scores as its training label; and (3) constructing a CNN model from the created data set. We applied the proposed method to wristwatches as the target domain and verified the estimation accuracy of the constructed CNN model. As a result, a high positive correlation was confirmed between the estimated impression scores and the impression scores calculated from the review-text data. In addition, because these results exceeded the estimation accuracy of a CNN model that did not learn the distribution of impression scores, learning the variation in people's impression evaluations was shown to be effective for improving estimation accuracy.
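
To make step (2) above concrete, the sketch below shows one way to train a CNN whose target for each product image is the empirical distribution of impression scores derived from its review texts, rather than a single hard label. This is a minimal illustration under assumed choices: the backbone architecture, the number of score bins (NUM_IMPRESSION_LEVELS), and the soft-label cross-entropy loss are placeholders, not the configuration reported in the paper.

```python
# Minimal sketch (not the authors' exact pipeline): a CNN trained against
# per-product *distributions* of impression scores instead of hard classes.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_IMPRESSION_LEVELS = 5  # assumed number of bins in the impression-score distribution


class ImpressionCNN(nn.Module):
    def __init__(self, num_levels: int = NUM_IMPRESSION_LEVELS):
        super().__init__()
        # Small convolutional backbone with batch normalization; layer sizes
        # here are placeholders, not the network described in the paper.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over impression-score bins


def soft_label_loss(logits: torch.Tensor, target_dist: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against a target distribution of impression scores
    (equivalent to KL divergence up to a constant)."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(target_dist * log_probs).sum(dim=1).mean()


if __name__ == "__main__":
    model = ImpressionCNN()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch: 8 product images and, for each, the empirical distribution
    # of impression scores derived from review texts (rows sum to 1).
    images = torch.randn(8, 3, 224, 224)
    raw_scores = torch.rand(8, NUM_IMPRESSION_LEVELS)
    target_dist = raw_scores / raw_scores.sum(dim=1, keepdim=True)

    logits = model(images)
    loss = soft_label_loss(logits, target_dist)
    loss.backward()
    optimizer.step()
    print(f"soft-label loss: {loss.item():.4f}")
```

With one-hot target distributions the same loss reduces to ordinary classification, which corresponds to the baseline of a CNN that does not learn the spread of impression scores across reviewers.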

References

  1. Hashimoto, S., Yamada, A., Nagata, N.: A quantification method of composite impression of products by externalized evaluation words of the appraisal dictionary with review text data. Int. J. Affect. Eng. 18(2), 59–65 (2019)

  2. Toyoda, N., et al.: Objective evaluation about texture for cosmetic ingredients by direct shear testing of powder bed. J. Soc. Pow. Technol. 52, 694–700 (2015). https://doi.org/10.4164/sptj.52.694

  3. Chen, C.H., Khoo, L.P., Chen, K., Pang, J.H., Huang, Y.: Consumer-oriented product form creation via Kansei engineering. In: Proceedings of the International Symposium for Emotion and Sensibility – Emotion Research in Practice, pp. 184–191 (2008)

  4. Tobitani, K., Matsumoto, T., Tani, Y., Fujii, H., Nagata, N.: Modeling of the relation between impression and physical characteristics on representation of skin surface quality. J. Inst. Image Inf. Telev. Eng. 71, 259–268 (2017)

  5. Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: rating pictorial aesthetics using deep learning. In: Proceedings of the 22nd ACM International Conference on Multimedia, pp. 457–466. ACM (2014)

  6. Sano, M.: Construction of the "Japanese appraisal dictionary of evaluation expressions – attitude evaluation edition": development of a language resource for capturing the diversity of evaluations (in Japanese). In: Proceedings of the Annual Meeting of the Association for Natural Language Processing, vol. 17, paper E1-2 (2011)

  7. Sano, M.: Japanese Dictionary of Appraisal -attitude-. Gengo Shigen Kyokai (2010)

  8. Teh, Y.W., Jordan, M.I., Beal, M.J., Blei, D.M.: Sharing clusters among related groups: hierarchical Dirichlet processes. In: Advances in Neural Information Processing Systems, pp. 1385–1392 (2005)

  9. Blei, D.M., Lafferty, J.D.: Topic models. In: Text Mining: Classification, Clustering, and Applications, vol. 10, p. 34 (2009)

  10. Chatfield, K., Simonyan, K., Vedaldi, A., Zisserman, A.: Return of the devil in the details: delving deep into convolutional nets. CoRR, abs/1405.3531 (2014)

  11. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167 (2015)

  12. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1026–1034 (2015)


Author information

Correspondence to Hidemichi Suzuki.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Suzuki, H., Yamada, A., Tobitani, K., Hashimoto, S., Nagata, N. (2019). An Automatic Modeling Method of Kansei Evaluation from Product Data Using a CNN Model Expressing the Relationship Between Impressions and Physical Features. In: Stephanidis, C. (eds) HCI International 2019 - Posters. HCII 2019. Communications in Computer and Information Science, vol 1032. Springer, Cham. https://doi.org/10.1007/978-3-030-23522-2_12

  • DOI: https://doi.org/10.1007/978-3-030-23522-2_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23521-5

  • Online ISBN: 978-3-030-23522-2

  • eBook Packages: Computer Science (R0)
