ABSTRACT
The transportation field requires a large number of simulation scenarios for testing, yet the generation of extreme scenarios remains relatively under-studied. In this paper, we define extreme scenarios as scenarios that are prone to causing failures, and divide them into two categories: extreme scenarios based on primitive values and extreme scenarios based on primitive coupling. We focus on the second category, which accounts for the coupling effects among different primitives in a scenario, applying adversarial-attack methods: FGSM, FGSM-target, BIM, ILCM, PGD, and the strategically-timed attack. Testing on a vehicle agent, the first five methods demonstrate the feasibility and effectiveness of extreme scenario generation, and the sixth simplifies the generation process.
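The five gradient-based methods listed above all build on the FGSM core idea of perturbing an input by a small step in the direction of the loss gradient's sign. As a minimal sketch of that idea (not the paper's implementation), the following uses a toy logistic model with a hand-derived gradient; the weights, input, and step size `eps` are illustrative assumptions:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method on a toy logistic model.

    Moves the input x by eps in the direction that increases the
    cross-entropy loss: x' = x + eps * sign(dL/dx).
    """
    z = float(np.dot(w, x) + b)      # logit of the linear model
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid probability
    grad_x = (p - y) * w             # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

# Illustrative values: nudge an input toward misclassification.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, 0.4, -0.1])
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.05)  # → [0.15, 0.45, -0.15]
```

Iterative variants such as BIM and PGD repeat this step several times with a smaller step size, projecting back into an epsilon-ball around the original input after each step.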
- Kalra N, Paddock S M (2016). Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability? Transportation Research Part A: Policy and Practice, 94, 182-193.
- Christensen A, Cunningham A, Engelman J, et al. (2015). Key considerations in the development of driving automation systems. 24th Enhanced Safety of Vehicles Conference. Gothenburg, Sweden.
- Benmimoun M (2017). Effective evaluation of automated driving systems. SAE Technical Paper.
- Urmson C, Anhalt J, Bagnell D, et al. (2008). Autonomous driving in urban environments: Boss and the Urban Challenge. Journal of Field Robotics, 25(8), 425-466.
- Saust F, Wille J M, Lichte B, et al. (2011). Autonomous vehicle guidance on Braunschweig's inner ring road within the Stadtpilot project. 2011 IEEE Intelligent Vehicles Symposium (IV). IEEE, 169-174.
- Ardelt M, Coester C, Kaempchen N (2012). Highly automated driving on freeways in real traffic using a probabilistic framework. IEEE Transactions on Intelligent Transportation Systems, 13(4), 1576-1585.
- Anderson J M, Nidhi K, Stanley K D, et al. (2014). Autonomous Vehicle Technology: A Guide for Policymakers. RAND Corporation.
- Masuda S (2017). Software testing design techniques used in automated vehicle simulations. 2017 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW). IEEE, 300-303.
- Browne C B, Powley E, Whitehouse D, et al. (2012). A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1), 1-43.
- Mnih V, Kavukcuoglu K, Silver D, et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533.
- Van Hasselt H, Guez A, Silver D (2016). Deep reinforcement learning with double Q-learning. Proceedings of the AAAI Conference on Artificial Intelligence, volume 30.
- Wang Z, Schaul T, Hessel M, et al. (2016). Dueling network architectures for deep reinforcement learning. International Conference on Machine Learning. PMLR, 1995-2003.
- Sorokin I, Seleznev A, Pavlov M, et al. (2015). Deep attention recurrent Q-network. arXiv preprint arXiv:1512.01693.
- Goodfellow I J, Shlens J, Szegedy C (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- Kurakin A, Goodfellow I, Bengio S (2016). Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
- Kurakin A, Goodfellow I J, Bengio S (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- Madry A, Makelov A, Schmidt L, et al. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
- Lin Y C, Hong Z W, Liao Y H, et al. (2017). Tactics of adversarial attack on deep reinforcement learning agents. arXiv preprint arXiv:1703.06748.