Projects:2021s1-13311 Decentralised Control for Cooperative Task Execution in 3D Multi-agent System
In prior work, a decentralised algorithm was developed to navigate a team of robotic agents through an obstacle course, enabling the agents to arrive simultaneously at multiple static target locations in a two-dimensional (2D) simulation environment. The objective of this project is to expand upon this algorithm by transitioning it to aerial robotic agents and adding extensions such as making the agents arrive at a pre-specified angle and allowing for moving targets. The project then involves simulating the extended algorithm using the SCRIMMAGE simulation software.
Introduction
This project is sponsored by the Defence Science and Technology Group (DST). Students will be working with staff at DST to support Australian Defence capabilities.
Project team
Project students
- Sebastian Fortuna
- James Roughan
Supervisors
- Duong Duc Nguyen
- Cheng-Chew Lim
- David Hubczenko (DST)
Aim
Prior work in the area analysed a scenario in which four agents must travel towards four static objectives in two dimensions. This has been developed upon in the following ways:
- Expanded to three dimensions
- Enabled targeting of randomly moving objectives
- Implemented functionality to control the angle of impact in both azimuth and elevation
- Investigated limitations of the physical sensors and actuators used to control flight with the SCRIMMAGE toolbox, using a fixed-wing unmanned aerial vehicle (UAV) model
Autonomy Algorithm
The existing algorithm used a proportional navigation (PN) based approach to guide the agents to the target. PN maintains a constant line-of-sight (LOS) angle throughout the trajectory, ensuring collision with a moving target. This contrasts with pursuit guidance (PG), which steers to minimise the angle between the agent's heading and the LOS so that the target is directly head-on from the agent. To guide the agent to collide at a specific impact angle, additional terms were incorporated into the autonomy calculation. While effective, the resulting trajectories often had the agent sweeping through wide arcs in which the target left the agent's field of view (FOV). To solve this issue, the more rudimentary PG approach was combined with PN in a dual algorithm that uses PN when the agent is facing near the target but reverts to PG when the target approaches the edge of the FOV. This proved effective in providing impact angle control in both azimuth and elevation while maintaining LOS with the target throughout the flight.
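The dual PN/PG switching scheme described above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the vector form of the PN law, the navigation gain N, the pursuit gain k_pg, and the FOV switching threshold are all illustrative assumptions.

```python
import numpy as np

def dual_guidance(r_rel, v_agent, v_target, fov_half_angle, N=3.0, k_pg=2.0):
    """Blended PN/PG lateral acceleration command (illustrative sketch).

    r_rel          : target position minus agent position
    v_agent        : agent inertial velocity
    v_target       : target inertial velocity
    fov_half_angle : seeker FOV half-angle in radians
    N, k_pg        : hypothetical navigation and pursuit gains
    """
    v_rel = v_target - v_agent
    los = r_rel / np.linalg.norm(r_rel)
    heading = v_agent / np.linalg.norm(v_agent)
    off_boresight = np.arccos(np.clip(np.dot(heading, los), -1.0, 1.0))

    if off_boresight < 0.8 * fov_half_angle:
        # True PN: a = N * (v_rel x omega), with LOS rotation rate
        # omega = (r x v_rel) / (r . r); magnitude reduces to N * Vc * lambda_dot
        omega = np.cross(r_rel, v_rel) / np.dot(r_rel, r_rel)
        return N * np.cross(v_rel, omega)

    # Near the FOV edge, revert to pursuit guidance: accelerate along the
    # component of the LOS perpendicular to the current heading, turning
    # the velocity vector back towards the target.
    speed = np.linalg.norm(v_agent)
    return k_pg * speed * (los - np.dot(los, heading) * heading)
```

In the PN branch the command is perpendicular to the relative velocity, so it turns the trajectory without directly changing closing speed; the PG branch simply turns the agent until the target is head-on, at which point the scheme hands back to PN.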
Simulation
The results from the SCRIMMAGE simulator were less successful. In the horizontal plane the drones track and pursue their targets effectively, but they are unable to manage altitude effectively. This leads to few misses when the agents and targets begin in the same plane, but a significant number of misses when the agents start substantially higher than their targets. Even successful runs often end only after a long downward spiral, the agent having reached the correct x and y coordinates while still above its target. In scenarios unaffected by this issue, the algorithm is effective and capable of angle control in the azimuth plane. The problem was confirmed to lie in the simulated airframe's ability to respond correctly to commanded changes in altitude. These errors are theorised to be a product either of the PID controller being incorrectly tuned, or of the physical model of the UAV being insufficiently responsive to desired changes in altitude.
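The theorised tuning problem can be illustrated with a minimal altitude PID loop. SCRIMMAGE's fixed-wing plugin has its own internal controller; the class below, with its hypothetical gains and saturation limit, only shows how a low proportional gain or a tight output limit produces the sluggish altitude response observed in the simulations.

```python
class AltitudePID:
    """Minimal PID altitude controller with output saturation (sketch).

    Gains and the saturation limit u_max are hypothetical values for
    illustration, not those used inside SCRIMMAGE.
    """

    def __init__(self, kp=0.5, ki=0.05, kd=0.2, u_max=0.3):
        self.kp, self.ki, self.kd, self.u_max = kp, ki, kd, u_max
        self.integral = 0.0
        self.prev_err = None

    def update(self, alt_cmd, alt, dt):
        err = alt_cmd - alt
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # A tight saturation (u_max) or a low kp caps the commanded climb
        # or descent rate, which would explain agents that reach the correct
        # x-y position while still well above their targets.
        return max(-self.u_max, min(self.u_max, u))
```

With a large altitude error the controller spends most of the engagement saturated, so the descent rate is constant and independent of the error magnitude until the agent is nearly at the target altitude.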
Project Outcomes
As a result of this project, a three-dimensional version of the PN and PG guidance laws was produced, which functions under ideal circumstances to target randomly moving objectives at specified impact angles. However, the algorithm is less effective in more realistic simulations. Inspection thus far has not yielded a simple fix, but it is theorised that further time and effort would result in a solution that better integrates the hardware and guidance dynamics. This would then provide a good base for implementing further improvements that had been planned as potential future work, such as noisy and restricted sensor modules, or dedicated communication protocols between the agents. These features would increase the realism of the testing scenario and permit further refinement of the algorithm for later deployment on physical prototypes.