Projects:2020s2-7311 Cooperative Multi-agent System to Achieve Multiple Synchronised Goals

Autonomous multi-agent systems are a driving force in the advancement of many industries, with numerous real-life applications including robotics and disaster rescue operations. It has been demonstrated that a team of three degrees of freedom (3-DoF) kinematic software agents can achieve multiple synchronised goals in a simulation environment without prior planning, using a decentralised online learning algorithm and dynamic agent-target assignments. The goals relate to widely useful logistical concepts such as achieving simultaneous arrival at multiple goal locations, adhering to pre-specified terminal arrival angles and avoiding obstacles.

Introduction

Project team

Project students

  • Sinuo Wang

Supervisors

  • Prof. Cheng-Chew Lim
  • Dr. Duong Duc Nguyen

Advisors

Objectives

  • The objective of this project is to develop the algorithm prototype from software into a hardware demonstration in which the agents retain the decentralised approach through on-board processing and communicate with each other in a relatively sparse manner to accomplish the objectives.

The project will develop a technology demonstrator for a cooperative multi-agent system that achieves multiple synchronised goals. For development and testing, an algorithm prototype (written in MATLAB) will be provided, and TurtleBots will be used as the hardware platform. Each robot is equipped with a LiDAR, an on-board processor and associated hardware, and is supported by a range of Robot Operating System (ROS) packages and a Wi-Fi ad hoc network using the BATMAN protocol.


Background

Traditional human operations and single autonomous agent missions show many limitations in real-life applications, such as inadequate human operational capability and low success rates when a single agent operates alone. A group of decentralised cooperative agents enhances the overall system performance: it extends what is feasible both spatially and temporally, is more robust because it does not suffer from a single point of failure, and provides reusability. Most importantly, in real operations the lack of environment information and the presence of uncertainties make prior-planning methods impractical.

Navigation Stack

The ROS navigation stack enables a mobile robot to move from place to place reliably. Its objective is to produce a safe path for the robot to execute by processing data from odometry, sensors and an environment map [1]. The core of the navigation stack is a node called move_base, which provides a ROS interface for configuring, running, and interacting with the navigation stack on a robot. The move_base package provides an implementation of an action that, given a goal in the world, will attempt to reach it with a mobile base [1]. The only output of move_base is cmd_vel, a combination of linear and angular velocity commands applied to the wheels, computed by the DWA local planner. Inside the move_base node there are five essential plugins: global_planner, local_planner, global_costmap, local_costmap and recovery_behaviors. Each has its own indispensable responsibility for the success of the navigation, and move_base links them together to accomplish the global navigation task.
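
As a minimal illustration of this interface, the sketch below (assuming a ROS 1 installation with rospy and a running move_base node; not part of the project code) sends a single navigation goal through the move_base action interface. The goal pose and frame are arbitrary example values.

 #!/usr/bin/env python
 import rospy
 import actionlib
 from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

 # Send one navigation goal to move_base and wait for the outcome.
 rospy.init_node("send_goal_example")
 client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
 client.wait_for_server()

 goal = MoveBaseGoal()
 goal.target_pose.header.frame_id = "map"       # goal expressed in the map frame
 goal.target_pose.header.stamp = rospy.Time.now()
 goal.target_pose.pose.position.x = 1.0         # example: 1 m ahead along x
 goal.target_pose.pose.orientation.w = 1.0      # face along the x-axis

 client.send_goal(goal)
 client.wait_for_result()
 rospy.loginfo("move_base finished with state %d", client.get_state())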

SLAM

SLAM (Simultaneous Localisation and Mapping) is a technique for building a map of an arbitrary space while estimating the robot's current location within it. SLAM is a well-known feature that the TurtleBot inherits from its predecessors. Many open-source packages are available for performing SLAM, such as Gmapping, Karto, Hector and the Cartographer developed by Google [2]. In this project, the Gmapping package is used for SLAM. It builds an occupancy-grid map from laser scans and odometry, and the resulting map can be saved and reloaded later for localisation and navigation. The in-built laser-scanning LiDAR sensor and the open-source slam_gmapping ROS package enable the TurtleBot to perform simultaneous localisation and mapping. The SLAM function is used to scan the lab environment, and the map is stored on the PC for future navigation. To build the map, open the visualisation tool RViz and launch the turtlebot3_slam package, then drive the TurtleBot with the keyboard teleoperation package to scan the surrounding environment and aggregate the scans into a map. The map obtained through SLAM is shown below.

SLAM Map of EM 306
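
While a mapping session runs, slam_gmapping publishes the growing occupancy grid on the /map topic. The minimal rospy sketch below (an illustration only, not part of the project code) subscribes to that topic and logs the map metadata, which is a convenient sanity check before saving the map.

 #!/usr/bin/env python
 import rospy
 from nav_msgs.msg import OccupancyGrid

 # Log basic metadata of the occupancy grid published by slam_gmapping.
 def on_map(msg):
     info = msg.info
     rospy.loginfo("Map: %d x %d cells at %.3f m/cell, origin (%.2f, %.2f)",
                   info.width, info.height, info.resolution,
                   info.origin.position.x, info.origin.position.y)

 rospy.init_node("map_listener_example")
 rospy.Subscriber("/map", OccupancyGrid, on_map)
 rospy.spin()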

Proportional Navigation Control Using Waypoint Method

The terms in the waypoint equations are all expressed using user-defined values or variables obtainable from the TurtleBot; the formula for the next waypoint y position follows the same method as that for x. In this way, PN control is applied to TurtleBot navigation in a waypoint manner. Each waypoint is published as a goal to move_base in a geometry_msgs/PoseStamped message. move_base then uses the path planners described above to generate a velocity command on the /cmd_vel topic for the TurtleBot's wheels, navigating the agent to the next waypoint.
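
The full waypoint equations are not reproduced here, but the computation can be sketched as follows: the PN lateral acceleration is taken proportional to the closing velocity and the line-of-sight rate, and the next waypoint follows from basic kinematics over one time interval. The navigation constant, the time step and the finite-difference LOS rate below are illustrative assumptions, not the project's exact implementation.

 import math

 # Sketch of a PN waypoint computation (assumed form, for illustration only).
 N = 3.0    # navigation constant, typically chosen between 3 and 5
 DT = 1.0   # time interval between waypoints [s]

 def next_waypoint(x, y, vx, vy, goal_x, goal_y, prev_los=None):
     """Compute the next waypoint from the current state using PN guidance."""
     # Line-of-sight (LOS) angle from the agent to the goal
     los = math.atan2(goal_y - y, goal_x - x)
     # LOS rate approximated by a finite difference between control cycles
     los_rate = 0.0 if prev_los is None else (los - prev_los) / DT
     # Closing velocity: component of the agent velocity along the LOS
     v_c = vx * math.cos(los) + vy * math.sin(los)
     # PN law: lateral acceleration proportional to closing velocity and LOS rate
     a_lat = N * v_c * los_rate
     speed = math.hypot(vx, vy)
     if speed < 1e-6:
         return x, y, los                      # not moving: stay in place
     nx, ny = -vy / speed, vx / speed          # unit normal to the velocity
     # Next waypoint from basic kinematics over one interval DT
     wp_x = x + vx * DT + 0.5 * a_lat * nx * DT ** 2
     wp_y = y + vy * DT + 0.5 * a_lat * ny * DT ** 2
     return wp_x, wp_y, los

Each waypoint computed this way would then be wrapped in a PoseStamped message and sent to move_base, for example through the action client sketched in the Navigation Stack section.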

Proportional Navigation Control Waypoint Implementation

The process of TurtleBot navigation control:

PN control.jpg

Results

To test the validity of the integrated proportional navigation control method, three initial heading angles are used: 45, 90 and 0 degrees. This is realised by changing the initial x and y velocities. For each initial heading angle, different line-of-sight angles are applied by varying the destination location, so that the trajectory can be observed for different heading errors. The figures below show the navigation trajectories and the changes in heading angle over the iterations.

Test Results with 45 Degree Initial Heading Angle for Different Goals
Test Results with 0 Degree Initial Heading Angle for Different Goals
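
The initial heading angles above are realised by setting the initial velocity components; the short sketch below illustrates the conversion and the resulting heading error for an example speed and goal offset (both values are arbitrary assumptions, not the test configuration).

 import math

 speed = 0.2          # example linear speed [m/s]
 gx, gy = 2.0, 1.0    # example goal offset from the start position [m]

 for heading_deg in (0.0, 45.0, 90.0):
     heading = math.radians(heading_deg)
     vx, vy = speed * math.cos(heading), speed * math.sin(heading)  # initial velocities
     los = math.atan2(gy, gx)                  # line-of-sight angle to the goal
     err_deg = math.degrees(los - heading)     # initial heading error
     print("heading %5.1f deg -> vx=%.3f, vy=%.3f, heading error %6.1f deg"
           % (heading_deg, vx, vy, err_deg))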

Conclusion

In this project, I developed a real-life demonstration of autonomous navigation with integrated proportional navigation (PN) control using TurtleBots. By adjusting the lateral acceleration through proportional navigation control, the agent is able to autonomously navigate to the goals without prior planning. The idea of the proportional navigation guidance law is inspired by the collision triangle: by correcting the flight path angle with respect to the line-of-sight angle, a linear feedback control system is formed.

The Robot Operating System plays a key role in the project. ROS provides a platform with many high-performance open-source packages, which lays the foundation for integrating PN control with the navigation stack. AMCL uses the KLD-sampling algorithm to provide an estimated localisation in the tf tree, and the drift between the map frame and the odom frame caused by the wheel encoders can be eliminated with the help of the AMCL estimate. SLAM is another powerful function supported by ROS: slam_gmapping takes the tf between the base frame and the odom frame together with the scan information from the LiDAR laser sensor to generate a map of the surroundings. The most powerful package is the navigation core, move_base, which builds the surrounding costmaps and feeds them into the global and local planners for path finding. In the global scope, path planning uses Dijkstra's algorithm, which takes a roadmap approach and converts path finding into a graph search over the grid-cell map. In the local area, the Dynamic Window Approach (DWA) is used as the local planner; it generates all possible velocity combinations, iterates through the velocity search space and selects the combination with the lowest cost value. Finally, the velocity picked by the DWA is published to cmd_vel and the velocity command is applied to the wheels of the TurtleBot to perform the autonomous navigation.

The PN control is integrated with the navigation stack using the waypoint method. After computing the lateral acceleration of the agent, the next waypoint after a certain time interval can be calculated using basic kinematics, and this position is then taken by move_base as the next goal until the current position is within a reasonable error range of the destination. All the test results verified the correct implementation of the integrated proportional navigation control algorithm. However, for destinations with large heading errors, the full version of the proportional navigation guidance lateral acceleration formula needs to be used.
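
The velocity-sampling idea behind the DWA local planner described above can be illustrated with the following toy sketch. It is not the dwa_local_planner implementation; the sample ranges and the cost function are placeholder assumptions that only show the select-lowest-cost structure.

 def dwa_select(v_samples, w_samples, cost_fn):
     """Pick the (linear, angular) velocity pair with the lowest cost."""
     best, best_cost = None, float("inf")
     for v in v_samples:               # candidate linear velocities
         for w in w_samples:           # candidate angular velocities
             c = cost_fn(v, w)         # e.g. weighted obstacle, path and goal costs
             if c < best_cost:
                 best, best_cost = (v, w), c
     return best

 # Example: prefer faster forward motion while tracking a desired turn rate.
 desired_turn_rate = 0.3
 cmd = dwa_select(
     v_samples=[0.05 * i for i in range(5)],           # 0.0 .. 0.2 m/s
     w_samples=[-0.5 + 0.1 * i for i in range(11)],    # -0.5 .. 0.5 rad/s
     cost_fn=lambda v, w: abs(w - desired_turn_rate) + (0.2 - v),
 )
 print("selected cmd_vel:", cmd)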

References

[1] N. Fragale, "move_base - ROS Wiki", Wiki.ros.org, 2020. [Online]. Available: http://wiki.ros.org/move_base. [Accessed: 03- Jun- 2021].

[2] G. Hoorn, "gmapping - ROS Wiki", Wiki.ros.org, 2019. [Online]. Available: http://wiki.ros.org/gmapping. [Accessed: 03- Jun- 2021].