Mimicking Nature: Analysis of Dragonfly Pursuit Strategies Using LSTM and Kalman Filter



Author(s)

Mehedi Hassan Zidan 1, Rayhan Ahmed 1, Khandakar Anim Hassan Adnan 1, Tajkurun Zannat Mumu 1, Md. Mahmudur Rahman 1, Debajyoti Karmaker 1,*

1. American International University-Bangladesh, Bangladesh

* Corresponding author.

DOI: https://doi.org/10.5815/ijitcs.2024.04.06

Received: 25 Feb. 2024 / Revised: 18 Apr. 2024 / Accepted: 5 Jun. 2024 / Published: 8 Aug. 2024

Index Terms

Predator, Prey, Pursuit, Strategy, Algorithm, Deep Learning, LSTM

Abstract

Pursuit of prey by a predator is a natural phenomenon in which a predator targets and chases prey in order to consume it. The predator's aim is to catch the prey, whereas the prey's aim is to escape. Earth hosts many predator species with different pursuit strategies: some are stealthy, others rely on sudden bursts of speed, yet no predator succeeds in every chase. A successful hunt depends on the pursuer's strategy. Among all predators, dragonflies, sometimes called natural drones, are considered the most effective because of their high rate of successful hunts. If their pursuit strategy can be extracted, analysed, and expressed as an algorithm for unmanned aerial vehicles, the success rate of such vehicles could be increased and could become even more efficient than that of the dragonfly itself. We examine the pursuit strategy of a dragonfly using an LSTM to predict the speed of, and distance between, predator and prey. A Kalman filter has also been used to trace the trajectories of both predator and prey. We found that dragonflies follow a distance-maintenance strategy, keeping their velocity nearly constant so as to hold a safe (mean) distance from the prey. This study can lead researchers to develop new algorithms that can be applied to unmanned aerial vehicles (UAVs).
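To make the modelling pipeline described in the abstract concrete, the sketch below is a minimal illustration, not the authors' released code. It shows one plausible way to combine the two components: a small LSTM that predicts the next predator-prey distance and predator speed from a window of past observations, and a constant-velocity Kalman filter that smooths a noisy 2-D trajectory. The framework choice (PyTorch and NumPy), the network sizes, the noise parameters, and the synthetic data are all assumptions made for illustration.

```python
import numpy as np
import torch
import torch.nn as nn

class PursuitLSTM(nn.Module):
    """Sketch: predict the next (distance, speed) pair from a window of
    past (distance, speed) observations. Sizes are illustrative only."""
    def __init__(self, input_size=2, hidden_size=32, output_size=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, output_size)

    def forward(self, x):              # x: (batch, window, 2)
        out, _ = self.lstm(x)          # out: (batch, window, hidden)
        return self.head(out[:, -1])   # next-step (distance, speed)

def kalman_track(positions, dt=1.0, q=1e-3, r=1e-2):
    """Sketch: constant-velocity Kalman filter over 2-D positions (N, 2).
    State is [px, py, vx, vy]; only the position is observed."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)   # measurement model
    Q = q * np.eye(4)                           # process noise (assumed)
    R = r * np.eye(2)                           # measurement noise (assumed)
    x = np.array([*positions[0], 0.0, 0.0])
    P = np.eye(4)
    track = []
    for z in positions:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)

# Example on hypothetical data: smooth a synthetic noisy flight track.
noisy = np.cumsum(np.random.randn(200, 2), axis=0)
smoothed = kalman_track(noisy)
```

In this sketch the Kalman filter cleans each tracked trajectory, and the smoothed distance and speed sequences would then be windowed and fed to the LSTM for training; the actual preprocessing and training details used in the paper are not reproduced here.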

Cite This Paper

Mehedi Hassan Zidan, Rayhan Ahmed, Khandakar Anim Hassan Adnan, Tajkurun Zannat Mumu, Md. Mahmudur Rahman, Debajyoti Karmaker, "Mimicking Nature: Analysis of Dragonfly Pursuit Strategies Using LSTM and Kalman Filter", International Journal of Information Technology and Computer Science (IJITCS), Vol. 16, No. 4, pp. 82-95, 2024. DOI: 10.5815/ijitcs.2024.04.06
