
A Visual Predictive Control Framework for Robust and Constrained Multi-Agent Formation Control

  • Regular paper
  • Published:
Journal of Intelligent & Robotic Systems

Abstract

In this study, the problem of leader-follower, position-based formation control is considered. Each agent in the multi-agent network is equipped with a perspective (pinhole) camera and is visually servoed. A depth-based visual predictive controller is proposed. The framework optimizes the planned trajectory over a prediction horizon while taking image-space and physical-space constraints into account. Furthermore, the presented control scheme provides robustness against camera occlusion, modeling errors, and uncertainties. The performance of the proposed tracking algorithm is validated through numerous simulations.


Data availability

None

Code Availability

None

References

  1. Oh, K.K., Park, M.C., Ahn, H.S.: A survey of multi-agent formation control. Automatica 53, 424–440 (2015)

  2. Zhang, Y., Mehrjerdi, H.: A survey on multiple unmanned vehicles formation control and coordination: normal and fault situations. In: Proc. 2013 International Conference on Unmanned Aircraft Systems (ICUAS), Atlanta, GA, USA, pp. 1087–1096 (2013)

  3. Hu, J., Xu, J., Xie, L.: Cooperative search and exploration in robotic networks. Unmanned Systems 1(01), 121–142 (2013)

  4. Panagou, D., Kyriakopoulos, K.J.: Cooperative formation control of underactuated marine vehicles for target surveillance under sensing and communication constraints. In: Proc. 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, pp. 1871–1876 (2013)

  5. der Walle, D.V., Fidan, B., Sutton, A., Yu, C., Anderson, B.D.O.: Non-hierarchical UAV formation control for surveillance tasks. In: Proc. 2008 American Control Conference (ACC), Seattle, WA, USA, pp. 777–782 (2008)

  6. Kim, J.H., Kwon, J.W., Seo, J.: Multi-UAV-based stereo vision system without GPS for ground obstacle mapping to assist path planning of UGV. Electron. Lett. 50(20), 1431–1432 (2014)

  7. Hutchinson, S., Hager, G.D., Corke, P.I.: A tutorial on visual servo control. IEEE Trans. Robot. Autom. 12(5), 651–670 (1996)

  8. Kragic, D., Christensen, H.I., et al.: Survey on visual servoing for manipulation. Computational Vision and Active Perception Laboratory, Fiskartorpsv 15, 2002 (2002)

  9. Cho, H.: Opto-mechatronic Systems handbook: Techniques and Applications. CRC Press, Massachusetts, US (2002)

  10. Janabi-Sharifi, F., Deng, L., Wilson, W.J.: Comparison of basic visual servoing methods. IEEE/ASME Transactions on Mechatronics 16(5), 967–983 (2010)

  11. Chen, X., Jia, Y.: Adaptive leader-follower formation control of non-holonomic mobile robots using active vision. IET Control Theory & Applications 9(8), 1302–1311 (2015)

  12. Wang, H., Guo, D., Liang, X., Chen, W., Hu, G., Leang, K.K.: Adaptive vision-based leader–follower formation control of mobile robots. IEEE Trans. Ind. Electron. 64(4), 2893–2902 (2016)

  13. Liang, X., Wang, H., Liu, Y.H., Chen, W., Liu, T.: Formation control of nonholonomic mobile robots without position and velocity measurements. IEEE Trans. Robot. 34(2), 434–446 (2017)

  14. Chueh, M., Yeung, Y.L.W.A., Lei, K.P.C., Joshi, S.S.: Following controller for autonomous mobile robots using behavioral cues. IEEE Trans. Ind. Electron. 55(8), 3124–3132 (2008)

  15. Fathian, K., Doucette, E., Curtis, J.W., Gans, N.R.: Vision-based distributed formation control of unmanned aerial vehicles. arXiv:1809.00096 (2018)

  16. Das, A.K., Fierro, R., Kumar, V., Ostrowski, J.P., Spletzer, J., Taylor, C.J.: A vision-based formation control framework. IEEE Trans. Robot. Autom. 18(5), 813–825 (2002)

  17. Mariottini, G.L., Morbidi, F., Prattichizzo, D., Pappas, G.J., Daniilidis, K.: Leader-follower formations: uncalibrated vision-based localization and control. In: Proc. 2007 IEEE International Conference on Robotics and Automation, Roma, Italy, pp. 2403–2408 (2007)

  18. Mariottini, G.L., Morbidi, F., Prattichizzo, D., Valk, N.V., Michael, N., Pappas, G., Daniilidis, K.: Vision-based localization for leader–follower formation control. IEEE Trans. Robot. 25(6), 1431–1438 (2009)

  19. Mariottini, G.L., Pappas, G., Prattichizzo, D., Daniilidis, K.: Vision-based localization of leader-follower formations. In: Proc. 44th IEEE Conference on Decision and Control, Seville, Spain, pp. 635–640 (2005)

  20. Dani, A.P., Gans, N., Dixon, W.E.: Position-based visual servo control of leader-follower formation using image-based relative pose and relative velocity estimation. In: Proc. 2009 American Control Conference (ACC), St. Louis, MO, USA, pp. 5271–5276 (2009)

  21. Fidan, B., Gazi, V., Zhai, S., Cen, N., Karataş, E.: Single-view distance-estimation-based formation control of robotic swarms. IEEE Trans. Ind. Electron. 60(12), 5781–5791 (2012)

  22. Liang, X., Liu, Y.H., Wang, H., Chen, W., Xing, K., Liu, T.: Leader-following formation tracking control of mobile robots without direct position measurements. IEEE Trans. Autom. Control 61(12), 4131–4137 (2016)

  23. Orqueda, O.A.A., Fierro, R.: Robust Vision-Based Nonlinear Formation Control. In: Proc. 2006 American Control Conference (ACC), Minneapolis, MN, USA, pp. 1422–1427 (2006)

  24. Poonawala, H., Satici, A.C., Gans, N., Spong, M.W.: Formation Control of Wheeled Robots with Vision-Based Position Measurement. In: Proc. 2012 American Control Conference (ACC), Montréal, Canada, pp. 3173–3178 (2012)

  25. Chaumette, F., Hutchinson, S.: Visual servo control. I. Basic approaches. IEEE Robotics & Automation Magazine 13(4), 82–90 (2006)

  26. Miao, Z., Zhong, H., Wang, Y., Zhang, H., Tan, H., Fierro, R.: Low-complexity leader-following formation control of mobile robots using only FOV-constrained visual feedback. IEEE Transactions on Industrial Informatics (2021)

  27. Miao, Z., Zhong, H., Lin, J., Wang, Y., Chen, Y., Fierro, R.: Vision-based formation control of mobile robots with FOV constraints and unknown feature depth. IEEE Trans. Control Syst. Technol. 29(5), 2231–2238 (2021)

  28. Li, Z., Yuan, Y., Ke, F., He, W., Su, C.-Y.: Robust vision-based tube model predictive control of multiple mobile robots for leader–follower formation. IEEE Trans. Ind. Electron. 67(4), 3096–3106 (2019)

  29. Lin, J., Miao, Z., Zhong, H., Peng, W., Wang, Y., Fierro, R.: Adaptive image-based leader–follower formation control of mobile robots with visibility constraints. IEEE Trans. Ind. Electron. 68(7), 6010–6019 (2020)

  30. Allibert, G., Courtial, E., Chaumette, F.: Predictive control for constrained image-based visual servoing. IEEE Trans. Robot. 26(5), 933–939 (2010)

  31. Fallah, M.M.H., Ghazbi, S.N., Mehrkish, A., Janabi-Sharifi, F.: Depth-based visual predictive control of tendon-driven continuum robots. In: Proc. of IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Boston, MA, USA, pp. 488–494 (2020)

  32. Fallah, M.M.H., Janabi-Sharifi, F.: Linear position-based visual predictive control. In: Proc. of the Canadian Society for Mechanical Engineering International Congress, p 2020. Charlottetown, PEI, Canada (2020)

  33. Sauvée, M., Poignet, P., Dombre, E.: Ultrasound image-based visual servoing of a surgical instrument through nonlinear model predictive control. The International Journal of Robotics Research 27(1), 25–40 (2008)

  34. Sauvée, M., Poignet, P., Dombre, E., Courtial, E.: Image based visual servoing through nonlinear model predictive control. In: Proc. 45th IEEE Conference on Decision and Control, San Diego, CA, USA, pp. 1776–1781 (2006)

  35. Economou, C.G., Morari, M., Bernhard, B.O.: Internal model control: Extension to nonlinear systems. Industrial & Engineering Chemistry Process Design and Development 25(2), 403–411 (1986)

  36. Cervera, E., Pobil, A.P.D., Berry, F., Martinet, P.: Improving image-based visual servoing with three-dimensional features. The International Journal of Robotics Research 22(10-11), 821–839 (2003)

  37. Fallah, M.M.H., Janabi-Sharifi, F.: Conjugated visual predictive control for constrained visual servoing. Journal of Intelligent & Robotic Systems 101(26), 1–21 (2021)

  38. Mayne, D.Q., Rawlings, J.B., Rao, C.V., Scokaert, P.O.: Constrained model predictive control: Stability and optimality. Automatica 36(6), 789–814 (2000)

  39. Chaumette, F., Hutchinson, S.: Visual servo control. II. Advanced approaches [Tutorial]. IEEE Robotics & Automation Magazine 14(1), 109–118 (2007)

  40. Sajjadi, S., Mehrandezh, M., Janabi-Sharifi, F.: A nonlinear adaptive model-predictive approach for visual servoing of unmanned aerial vehicles. In: Martínez-García, A., Bhattacharya, I., Otani, Y., Tutsch, R (eds.) Progress in Optomechatronic Technologies, pp 153–164. Springer, Singapore (2019)

  41. Crowther, B., Lanzon, A., Maya-Gonzalez, M., Langkamp, D.: Kinematic analysis and control design for a nonplanar multirotor vehicle. Journal of Guidance, Control, and Dynamics 34(4), 1157–1171 (2011)


Acknowledgments

This work was sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) through Discovery Grant #2017 06930.


Author information

Authors and Affiliations

Authors

Contributions

M. M. H. Fallah developed the formulation and simulations, and authored the manuscript. F. Janabi-Sharifi supervised M. M. H. Fallah in the problem formulation and control development, and participated in the authorship of the manuscript. S. Sajjadi contributed to the formatting and editing of the paper and to the simulations of the UAV section. M. Mehrandezh provided technical advice, and reviewed and edited the paper.

Corresponding author

Correspondence to Farrokh Janabi-Sharifi.

Ethics declarations

Ethics approval

Not applicable

Consent to participate

Not applicable

Consent for Publication

Not applicable

Conflict of Interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix


The motion dynamics of the UAV [41], equipped with a camera, can be described by the following equations:

$$\begin{array}{@{}rcl@{}} \dot{x}_{1} &=& x_{2}\\ \dot{x}_{2} &=& \left[\begin{array}{c} 0 \\ 0 \\ -g \end{array}\right]+\frac{1}{m} \left[\begin{array}{ccc} c_{\phi} c_{\psi}-c_{\theta} s_{\phi} s_{\psi} & -c_{\psi} s_{\phi}-c_{\phi} c_{\theta} s_{\psi} & s_{\theta} s_{\psi} \\ c_{\theta} c_{\psi} s_{\phi}+c_{\phi} s_{\psi} & c_{\phi} c_{\theta} c_{\psi}-s_{\phi} s_{\psi} & -c_{\psi} s_{\theta} \\ s_{\phi} s_{\theta} & c_{\phi} s_{\theta} & c_{\theta} \end{array}\right] \left[\begin{array}{c} f_{x} \\ f_{y} \\ f_{z} \end{array}\right]\\ \dot{x}_{3} &=& \left[\begin{array}{ccc} 1 & 0 & -s_{\theta} \\ 0 & c_{\phi} & c_{\theta} s_{\phi} \\ 0 & -s_{\phi} & c_{\theta} c_{\phi} \end{array}\right]^{-1} x_{4}\\ \dot{x}_{4} &=& \left[\begin{array}{c} \tau_{\phi} I_{xx}^{-1} \\ \tau_{\theta} I_{yy}^{-1} \\ \tau_{\psi} I_{zz}^{-1} \end{array}\right]-\left[\begin{array}{l} \frac{I_{yy}-I_{zz}}{I_{xx}} \omega_{y} \omega_{z} \\ \frac{I_{zz}-I_{xx}}{I_{yy}} \omega_{x} \omega_{z} \\ \frac{I_{xx}-I_{yy}}{I_{zz}} \omega_{x} \omega_{y} \end{array}\right] \end{array}$$
(18)

where s and c denote the \(\sin \limits ()\) and \(\cos \limits ()\) operators and x = (x1,x2,x3,x4) represents the state vector of the vehicle. Here, x1 = (x,y,z) and \(x_{2}=(\dot {x},\dot {y},\dot {z})\) are the position and velocity of the UAV. The attitude of the UAV in the reference frame is denoted by x3 = (ϕ,𝜃,ψ), where ϕ, 𝜃, and ψ are the roll, pitch, and yaw angles in the global frame. In addition, the angular velocity of the UAV in the body frame is described by x4 = (ωx,ωy,ωz). It is worth mentioning that the angular velocity vector \(\omega \neq \dot {\theta }\): the angular velocity is a vector pointing along the axis of rotation in the body frame, whereas \(\dot {\theta }\) is the vector of roll, pitch, and yaw rates. The manipulated variables for controlling the pose of the vehicle are the force, \(\mathbf {F}=\left [\begin {array}{lll} f_{x} & f_{y} & f_{z} \end {array}\right ]^{\mathrm {T}}\), and torque, \(\mathbf {M}=\left [\begin {array}{lll} \tau _{\phi } & \tau _{\theta } & \tau _{\psi } \end {array}\right ]^{\mathrm {T}}\), vectors generated by the propellers. A linear model relates each propeller's angular velocity to the generated force f and torque τ:
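The state equations of Eq. (18) above can be sketched as a state-derivative function. The mass, gravity, and inertia values below are illustrative placeholders rather than the parameters of Table 1, and `uav_dynamics` is a hypothetical helper name:

```python
import numpy as np

def uav_dynamics(x, f_body, tau, m=1.5, g=9.81, I=(0.03, 0.03, 0.05)):
    """Sketch of the state derivative in Eq. (18).

    x is the 12-dim state (position, velocity, Euler angles, body rates);
    f_body = (fx, fy, fz) and tau = (tau_phi, tau_theta, tau_psi) are the
    propeller force/torque inputs. m, g, and I are placeholder values.
    """
    vel = x[3:6]
    phi, theta, psi = x[6:9]          # roll, pitch, yaw
    omega = x[9:12]                   # body angular rates
    c, s = np.cos, np.sin

    # Rotation matrix mapping the body-frame force to the world frame
    R = np.array([
        [c(phi)*c(psi) - c(theta)*s(phi)*s(psi),
         -c(psi)*s(phi) - c(phi)*c(theta)*s(psi), s(theta)*s(psi)],
        [c(theta)*c(psi)*s(phi) + c(phi)*s(psi),
         c(phi)*c(theta)*c(psi) - s(phi)*s(psi), -c(psi)*s(theta)],
        [s(phi)*s(theta), c(phi)*s(theta), c(theta)],
    ])
    acc = np.array([0.0, 0.0, -g]) + R @ np.asarray(f_body) / m

    # Euler-angle kinematics: invert the body-rate mapping
    W = np.array([[1.0, 0.0, -s(theta)],
                  [0.0, c(phi), c(theta)*s(phi)],
                  [0.0, -s(phi), c(theta)*c(phi)]])
    euler_dot = np.linalg.solve(W, omega)

    # Rotational dynamics with gyroscopic coupling terms
    Ixx, Iyy, Izz = I
    wx, wy, wz = omega
    omega_dot = np.array([
        tau[0]/Ixx - (Iyy - Izz)/Ixx * wy * wz,
        tau[1]/Iyy - (Izz - Ixx)/Iyy * wx * wz,
        tau[2]/Izz - (Ixx - Iyy)/Izz * wx * wy,
    ])
    return np.concatenate([vel, acc, euler_dot, omega_dot])
```

At hover (zero attitude and rates, thrust equal to mg), the derivative vanishes, which is a quick sanity check of the sign conventions.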

$$\begin{aligned} f &=K_{f} \cdot \omega^{2} \\ \tau &=K_{\tau} \cdot \omega^{2} \end{aligned}$$
(19)

where Kf and Kτ are the force and torque coefficients, and ω represents the propeller's rotational velocity. The transfer matrix T can be used to calculate the velocity of each propeller from the required F and M vectors computed by the controller:

$$\boldsymbol{\Omega}=T^{-1}\left[\begin{array}{l} \mathbf{F} \\ \mathbf{M} \end{array}\right]$$
$$\begin{aligned} T=&\operatorname{diag}\left( \frac{\sqrt{3} K_{f} s_{\gamma}}{2}, \frac{K_{f} s_{\gamma}}{2}, K_{f} c_{\gamma}, \frac{L K_{f} c_{\gamma}}{2}, \frac{\sqrt{3} L K_{f} s_{\gamma}}{2}, 1 \right) \left[\begin{array}{cccccc} 1 & 0 & -1 & 1 & 0 & -1 \\ -1 & 2 & -1 & -1 & 2 & -1 \\ -1 & -1 & -1 & -1 & -1 & -1 \\ 1 & 2 & 1 & -1 & -2 & -1 \\ 1 & 0 & -1 & -1 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]\\ &+\operatorname{diag}\left( 1, 1, 1, \frac{\sqrt{3} K_{\tau} s_{\gamma}}{2}, \frac{K_{\tau} s_{\gamma}}{2}, K_{\tau} c_{\gamma}\right) \left[\begin{array}{cccccc} 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ -1 & 0 & 1 & 1 & 0 & -1 \\ 1 & 2 & 1 & -1 & -2 & -1 \\ 1 & -1 & 1 & -1 & 1 & -1 \end{array}\right] \end{aligned}$$

where the vector \(\boldsymbol {\Omega }=\left [\begin {array}{llllll} {\omega _{1}^{2}} & {\omega _{2}^{2}} & {\omega _{3}^{2}} & {\omega _{4}^{2}} & {\omega _{5}^{2}} & {\omega _{6}^{2}} \end {array}\right ]^{\mathrm {T}}\) is formed from the squared spinning velocities of the six propellers.
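As a sketch, the allocation step Ω = T⁻¹[F; M] followed by an element-wise square root recovers the individual propeller speeds. The function name and the clipping of negative entries are my additions, not from the paper:

```python
import numpy as np

def propeller_speeds(T, F, M):
    """Recover propeller speeds omega_i from Omega = T^{-1} [F; M].

    T is the 6x6 allocation matrix defined above (built from K_f, K_tau,
    the rotor tilt angle gamma, and the arm length L).
    """
    wrench = np.concatenate([F, M])      # stack force and torque demands
    Omega = np.linalg.solve(T, wrench)   # squared speeds omega_i^2
    # Negative entries would give imaginary speeds; clip them to zero.
    return np.sqrt(np.clip(Omega, 0.0, None))
```

With a purely illustrative invertible T such as the identity, a pure thrust demand maps back to a single spinning propeller, which exercises the inversion and square-root steps.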

Table 1 Dynamics and control parameters of the UAV

For the simulations in this paper, an LQR controller was designed by successively linearizing the system dynamics via the Jacobian at each control step. This successive-linearization model-based LQR controller was found to be effective in stabilizing and controlling the position and orientation of the proposed UAV. The remaining dynamics and controller parameters used in the simulations can be found in Table 1.
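The successive-linearization LQR loop described above can be sketched as follows. The finite-difference Jacobians stand in for whatever linearization the authors used, and the function names and weights are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Infinite-horizon continuous LQR gain K = R^{-1} B^T P."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

def linearize(f, x, u, eps=1e-6):
    """Finite-difference Jacobians A = df/dx, B = df/du at (x, u)."""
    f0 = f(x, u)
    A = np.zeros((len(f0), len(x)))
    B = np.zeros((len(f0), len(u)))
    for i in range(len(x)):
        dx = np.zeros(len(x)); dx[i] = eps
        A[:, i] = (f(x + dx, u) - f0) / eps
    for j in range(len(u)):
        du = np.zeros(len(u)); du[j] = eps
        B[:, j] = (f(x, u + du) - f0) / eps
    return A, B

# Each control step: relinearize about the current operating point and
# recompute the gain (successive-linearization LQR), e.g.:
#   A, B = linearize(dynamics, x_k, u_k)
#   K = lqr_gain(A, B, Q, R)
#   u = u_ref - K @ (x_k - x_ref)
```

For the double integrator with unit weights, the closed-form gain is K = [1, √3], which the CARE-based solver reproduces.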


About this article


Cite this article

Fallah, M.M.H., Janabi-Sharifi, F., Sajjadi, S. et al. A Visual Predictive Control Framework for Robust and Constrained Multi-Agent Formation Control. J Intell Robot Syst 105, 72 (2022). https://doi.org/10.1007/s10846-022-01674-5
