Abstract
In vehicular networks, parked vehicles can join vehicular communication as static nodes, but encouraging people to share their wireless devices during parking still suffers from user selfishness, unpredictable parking locations, and delayed data transmission. In this study, we propose a centralized service provider (SP) system that distributes mobile parking incentives to vehicle users as parking tasks via a smartphone app. With the support of the SP system, mobile users can accept parking tasks and park their vehicles at the proposed locations for rewards, which benefits vehicular communication. Meanwhile, we design a dynamic pricing algorithm for the SP system based on reinforcement learning (RL) that aims to maximize the connectivity of all road segments while minimizing the total cost of the SP. Since the SP system learns slowly in the presence of a large state space owing to the curse of dimensionality, we develop a deep Q-network (DQN) based payment strategy that exploits a deep neural network to compress the learning state space and estimate the Q-value of each payment value. Extensive simulations with realistic parking deployments demonstrate that the proposed scheme accelerates learning toward an optimal payment policy, increases the task completion rate, and yields parking results that effectively enhance vehicular communication.
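The DQN-based payment strategy described above can be illustrated with a minimal sketch: a tiny numpy Q-network that maps a road-segment connectivity state to Q-values over a discrete grid of payment levels, chooses a payment epsilon-greedily, and performs one temporal-difference update. This is an assumption-laden toy, not the paper's implementation: the state layout (`N_SEGMENTS`), the payment grid (`PAYMENTS`), the reward, and all hyperparameters are hypothetical placeholders.

```python
import numpy as np

# Illustrative sketch only (not the authors' system). State, payment grid,
# reward, and hyperparameters below are hypothetical placeholders.
rng = np.random.default_rng(0)

N_SEGMENTS = 8                              # state: connectivity of each road segment
PAYMENTS = np.array([0.5, 1.0, 1.5, 2.0])   # candidate incentive payment levels
N_ACTIONS = len(PAYMENTS)
HIDDEN = 16
GAMMA, LR, EPS = 0.9, 0.01, 0.1

# Parameters of a single-hidden-layer Q-network
W1 = rng.normal(0.0, 0.1, (N_SEGMENTS, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: Q-value estimate for every candidate payment."""
    h = np.maximum(0.0, state @ W1 + b1)    # ReLU hidden layer
    return h @ W2 + b2, h

def choose_payment(state):
    """Epsilon-greedy payment selection over the discrete payment grid."""
    q, _ = q_values(state)
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))  # explore
    return int(np.argmax(q))                 # exploit

def td_update(state, action, reward, next_state):
    """One gradient step on the squared TD error for the taken action."""
    global W1, b1, W2, b2
    q, h = q_values(state)
    q_next, _ = q_values(next_state)
    target = reward + GAMMA * q_next.max()   # bootstrapped TD target
    err = q[action] - target                 # TD error
    # Compute all gradients before applying any update
    dh = err * W2[:, action] * (h > 0)       # backprop through ReLU
    grad_W1, grad_b1 = np.outer(state, dh), dh
    W2[:, action] -= LR * err * h
    b2[action] -= LR * err
    W1 -= LR * grad_W1
    b1 -= LR * grad_b1
    return err

# One illustrative interaction step with a random state
s = rng.random(N_SEGMENTS)
a = choose_payment(s)
reward = 1.0 - 0.1 * PAYMENTS[a]             # hypothetical: connectivity gain minus cost
err = td_update(s, a, reward, rng.random(N_SEGMENTS))
```

In the paper's setting the network would be trained from replayed experience over many episodes; the single update above only shows the shape of the state-to-payment mapping that lets DQN cope with the large state space.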
Acknowledgements
This work was supported in part by NSFC Grants No. 61572113 and No. 61877009; China Postdoctoral Science Foundation No. 2014M562308; the Science and Technology Achievements Transformation Demonstration Project of Sichuan Province of China No. 2018CC0094; and the Fundamental Research Funds for the Central Universities No. ZYGX2015J155, ZYGX2016J084, ZYGX2016J195, ZYGX2019J075.
Cite this article
Yang, M., Liu, N., Zuo, L. et al. Mobile parking incentives for vehicular networks: a deep reinforcement learning approach. CCF Trans. Pervasive Comp. Interact. 2, 261–274 (2020). https://doi.org/10.1007/s42486-020-00032-4