
JACIII Vol.21 No.2 pp. 221-227
doi: 10.20965/jaciii.2017.p0221
(2017)

Paper:

Battlefield Agent Decision-Making Based on Markov Decision Process

Jia Zhang*,**, Xiang Wang*,**, Fang Deng*,**, Bin Xin*,**, and Wenjie Chen*,**

*School of Automation, Beijing Institute of Technology
Beijing 100081, China

**Key Laboratory of Complex System Intelligent Control and Decision
Beijing 100081, China

Received:
June 2, 2016
Accepted:
October 31, 2016
Online released:
March 15, 2017
Published:
March 20, 2017
Keywords:
decision support, Markov decision process, softmax regression, random forest
Abstract
Battlefield decision-making is an important part of modern information warfare. It analyzes and integrates battlefield information, reduces operators' workload, and assists them in making decisions quickly in complex battlefield environments. This paper presents a dynamic battlefield decision-making method based on Markov Decision Processes (MDP). With this method, operators can obtain decision support quickly even under incomplete information. To improve the credibility, dynamic adaptability, and intelligence of decisions, softmax regression and random forest are introduced to improve the MDP model. Simulations show that the method is intuitive and practical, and has remarkable advantages in solving dynamic decision problems under incomplete information.
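The abstract describes solving a decision model formulated as an MDP. As a rough illustration of that formulation, the following is a minimal value-iteration sketch over a toy two-state, two-action MDP; the states, actions, transition probabilities, and rewards here are made up for illustration and are not the paper's battlefield model.

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Solve a finite MDP by value iteration.

    P[a][s][s2] -- probability of moving from state s to s2 under action a
    R[a][s]     -- expected immediate reward for taking action a in state s
    Returns the converged state-value function V.
    """
    n_states = len(R[0])
    V = [0.0] * n_states
    while True:
        # Bellman optimality backup: best action value at each state
        V_new = [
            max(
                R[a][s] + gamma * sum(P[a][s][s2] * V[s2] for s2 in range(n_states))
                for a in range(len(R))
            )
            for s in range(n_states)
        ]
        if max(abs(v1 - v0) for v1, v0 in zip(V_new, V)) < tol:
            return V_new
        V = V_new


# Toy example with hypothetical numbers (2 states, 2 actions)
P = [
    [[0.8, 0.2], [0.1, 0.9]],  # transitions under action 0
    [[0.5, 0.5], [0.3, 0.7]],  # transitions under action 1
]
R = [
    [1.0, 0.0],  # rewards for action 0 in states 0, 1
    [0.5, 0.5],  # rewards for action 1 in states 0, 1
]

V = value_iteration(P, R)

# Greedy policy extracted from the converged values
policy = [
    max(range(2), key=lambda a: R[a][s] + 0.9 * sum(P[a][s][s2] * V[s2] for s2 in range(2)))
    for s in range(2)
]
```

In the paper's setting, softmax regression and random forest would be used to refine parts of such a model (e.g., estimating probabilities from battlefield data) rather than fixing them by hand as above.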
Cite this article as:
J. Zhang, X. Wang, F. Deng, B. Xin, and W. Chen, “Battlefield Agent Decision-Making Based on Markov Decision Process,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.2, pp. 221-227, 2017.
