Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination


Abstract here

Introduction

Human beings are gifted with a wonderful ability that one might subconsciously take for granted: the ability to collaborate. Tasks such as construction, transportation, or production are naturally rooted in collaboration, requiring each individual to perform collaborative activities so that the team's goals are satisfied more productively and efficiently. Coordination in a multi-robot system provides flexibility and reliability, as the failure of one or more robots does not compromise the performance of the whole system. A task executed by a multi-robot system can take less execution time, computational resource, or energy than the same task executed by a single robot. Executing a task, regardless of its complexity, requires a robot to make proper decisions to optimally satisfy the requirements. Decisions are usually made by analysing data extracted from the robot's onboard sensors. Recent advancements in the field of Artificial Intelligence (AI), along with the commercialisation of various high-end sensors, give a robot the ability to make complex decisions on its own in response to the surrounding environment. Furthermore, a joint mission executed by a multi-robot system requires information to be exchanged frequently among team members. This information exchange allows each robot to make decisions in harmony with its teammates to satisfy the team's goal, so a reliable communication channel must be established within the team. Thus far, emerging technologies in computer networking, such as the Internet of Things, have enabled robust, efficient and scalable protocols for inter-node communication. This research primarily focuses on the question: can the combination of AI and advances in information technology enable a multi-robot system to execute cooperative missions under disturbances in a robust, reliable and resilient manner?

Project team

Project students

  • Nha Nam Nguyen Nguyen
  • Zeping Zhao

Supervisors

  • Professor Cheng-Chew Lim

Advisors

  • DST Group

Objectives

In a rapidly changing contested environment, a team of robots navigates, individually and collectively, through an area and arrives at a set of targets simultaneously. The team is loaded with a priori knowledge relevant to the contested operating environment. Each robot relies on its sensors to collect data on the surroundings and uses AI techniques to process the data, so that this prior knowledge is updated periodically to cope with unpredictable changes in the operating environment. Throughout the mission, information exchange is required to combine each robot's local information so that the team's navigation mission can be achieved. The navigation part executes based on the map generated from Cooperative Mapping. In the event of a member failure, the team adaptively reorganises the information from the remaining active robots to achieve the mission. To add another layer of guarantee to simultaneous arrival, every new navigation task must be synchronised, in other words, activated at the same time across the team members. From this scenario, several research objectives can be identified. The research aims to study various navigation techniques and apply the domain knowledge to simultaneous-arrival missions. It also examines different computer-networking approaches to establish a reliable, robust and scalable communication medium for a multi-robot system. This research further aims to develop a multi-robot coordination framework which serves as an abstract interface to handle task synchronisation and team member failures. With these objectives stated, the research delivers a multi-robot testbed where all studies and proposed techniques can be verified in a unified system.

Background

Common Background

Robot Operating System (ROS)

The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system; it also provides development environments and tools for building robot application programs [1]. ROS has one of the largest developer communities in the world, so developers and other interested people can share their findings and solutions globally, rather than problems being solved, or solutions kept, within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a visualisation toolkit that helps reveal errors while a robot is running. The ROS master provides a name server for node connections and message communication; it must be started before running any program, as nodes rely on it to establish their connections [1].

A node is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics; connections are negotiated through the master node and the data is then carried over the TCP/IP-based protocol [1].

A message is the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. Message flow is unidirectional, from a publishing node to a subscribing node [1].

A topic names the channel over which nodes exchange a particular kind of information, much like a topic of conversation. Each topic has at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them. The master only matches publishers with subscribers; the messages themselves travel directly between the nodes [1].
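As an illustration of the publish/subscribe pattern these concepts describe, the following library-free Python sketch mimics the roles of the master and the nodes. It is a simplification for intuition only: the class names are illustrative, and in real ROS the master only resolves names (the data flows peer-to-peer via rospy's Publisher and Subscriber).

```python
# Minimal illustration of ROS-style topic-based publish/subscribe
# (plain Python, no ROS dependency; names are illustrative only).

class Master:
    """Plays the name-server role: maps topic names to subscriber callbacks."""
    def __init__(self):
        self.subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: from the publisher to every subscriber of the topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)       # a node subscribing to laser scans
master.publish("/scan", {"ranges": [1.2, 0.8]})  # another node publishing a message
```

After the publish call, the subscriber's callback has seen exactly the one message, reflecting the unidirectional, typed message flow described above.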

TurtleBot3 Burger

For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project: it is an efficient and economical choice, low-cost yet equipped with the necessary functionality [1].

[Figure: TurtleBot3 Burger]
[Figure: dimensions of the TurtleBot3 Burger]
[Figure: overall flow chart of the cooperative mapping pipeline]

Cooperative Mapping

Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment compared with single-robot mapping, and it separates into several processes. Each robot takes odometry data, collected from its IMU and wheel encoders, together with scan data from its laser scanner, and runs SLAM and exploration to generate an individual map, which is published to the ROS master. A map-merging node subscribes to these maps and processes them with image-processing techniques using OpenCV, finally publishing the merged map for use in the multi-robot navigation part of the project. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.

Gmapping SLAM

SLAM stands for Simultaneous Localisation and Mapping, meaning two processes run together: localisation and mapping. SLAM is what a robot uses to build the map it needs in order to navigate or explore. In the localisation part, the robot localises itself against landmarks at the same time as it builds its map from those landmarks [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping, by contrast, the pose is determined from the observation (LiDAR information) and the control (odometry information) [11].

Wavefront Frontier Detector (WFD) Exploration

The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human driving the SLAM manually. Exploration can be implemented for both single-robot and cooperative mapping. A frontier-based approach provides the basic principle: the robot detects frontiers, the boundaries between known free space and unexplored space, and navigates towards them. The process separates into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; further frontier details are discussed below. Once a goal is set, the robot navigates to it using the default DWA planner, which combines a global path planner and a local path planner: the global planner produces a path to the current navigation goal (not necessarily the final destination), while the local planner acts as an assistant that steers the robot around obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method has been modified so that each robot explores a limited area instead of the whole unknown environment [12][13].
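To make the frontier idea concrete, here is a minimal Python sketch of the outer breadth-first search of WFD over a small occupancy grid. Cell values follow the ROS nav_msgs/OccupancyGrid convention (-1 unknown, 0 free, 100 occupied). This is an illustrative simplification, not the project's implementation: the full WFD also runs an inner BFS to group frontier cells into frontier regions.

```python
from collections import deque

FREE, UNKNOWN, OCCUPIED = 0, -1, 100  # nav_msgs/OccupancyGrid cell values

def is_frontier(grid, r, c):
    """A frontier cell is a free cell bordering at least one unknown cell."""
    if grid[r][c] != FREE:
        return False
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == UNKNOWN:
            return True
    return False

def find_frontiers(grid, start):
    """Breadth-first search over the reachable free space from the robot's
    cell, collecting frontier cells (the outer BFS of WFD)."""
    rows, cols = len(grid), len(grid[0])
    seen, frontiers = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if is_frontier(grid, r, c):
            frontiers.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen \
                    and grid[nr][nc] == FREE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return frontiers

# A toy map: the right column is unexplored, one cell is an obstacle.
grid = [
    [FREE, FREE,     UNKNOWN],
    [FREE, OCCUPIED, UNKNOWN],
    [FREE, FREE,     FREE],
]
frontiers = find_frontiers(grid, (0, 0))  # free cells touching unknown space
```

The returned frontier cells become candidate navigation goals; limiting the area a robot explores, as in the modified WFD, amounts to restricting which cells the BFS is allowed to visit.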

Map Merging

On the remote PC, the map-merging node subscribes to the maps that each robot publishes as topics to the ROS master. After each robot's map is subscribed, it is processed in the map-merging node, and the final merged map is published back to all robots in preparation for the following tasks. Map merging necessarily relies on the single-robot exploration: the node searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the maps over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them using the image-matching algorithm proposed by [14] together with OpenCV's general image-stitching pipeline, and output the merged map to the simultaneous arrival task.
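The ORB detection and matching steps come from OpenCV; once the relative transform between two maps has been estimated, the stitch itself amounts to resampling one grid into the other's frame and combining cells. Below is a minimal sketch of that combining step, assuming a pure translation offset is already known. The function name and the cell-combination rule (known overrides unknown; where both maps know a cell, keep the more pessimistic occupancy) are illustrative assumptions, not the thesis code.

```python
UNKNOWN = -1  # nav_msgs/OccupancyGrid convention: -1 unknown, 0 free, 100 occupied

def merge_maps(map_a, map_b, offset):
    """Combine two aligned occupancy grids. `offset` = (row, col) of map_b's
    origin in the merged frame; map_a's origin sits at (0, 0). A known cell
    overrides an unknown one; where both maps know a cell, the higher
    (more pessimistic) occupancy value wins."""
    dr, dc = offset
    rows = max(len(map_a), dr + len(map_b))
    cols = max(len(map_a[0]), dc + len(map_b[0]))
    merged = [[UNKNOWN] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):
        for c, v in enumerate(row):
            merged[r][c] = v
    for r, row in enumerate(map_b):
        for c, v in enumerate(row):
            if v != UNKNOWN:
                tr, tc = r + dr, c + dc
                merged[tr][tc] = v if merged[tr][tc] == UNKNOWN else max(merged[tr][tc], v)
    return merged

# Toy example: map_b covers the row that map_a left partly unknown.
map_a = [[0, 100], [UNKNOWN, 0]]
map_b = [[0, 0]]
merged = merge_maps(map_a, map_b, offset=(1, 0))
```

In the real pipeline the offset is not given: it is recovered by matching ORB features between the two grids rendered as images, which is exactly why a wrong initial pose produces a distorted merge.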

Cooperative Navigation

Simultaneous arrival using the Dynamic Window Approach. In a complex environment where dynamic obstacles may be present, simultaneous-arrival tasks must reliably perform obstacle avoidance to cope with the uncertainties of the operating environment. All members must respond quickly to unanticipated changes in the environment, yet remain cooperative with the others to guarantee a synchronised time of arrival. The Dynamic Window Approach (DWA), combined with layered costmaps, is a commonly used and well-documented motion planner for ROS navigation tasks. To the best of our knowledge up to the thesis submission time, DWA, despite its reputation, has not been well applied to the field of simultaneous arrival. In this section, a modified DWA is proposed that incorporates simultaneous arrival as a criterion in navigation tasks. The proposed technique is called Simultaneous Arrival using the Dynamic Window Approach (SADWA).

Using the ROS Navigation Stack, with a map loaded, a robot can reliably navigate to a target. However, the Navigation Stack does not support multi-robot coordination, so in this research we modify it to incorporate a simultaneous-arrival ability. By default, the Dynamic Window Approach (DWA) is used as the motion planner in the navigation stack. The robot generates a velocity search space containing a set of velocities constrained by criteria such as distance to obstacles and achievable acceleration within the next time frame. DWA then selects an optimal translational and rotational velocity from this search space by minimising its objective function; that velocity safely drives the robot towards the target. We propose a new technique that modifies DWA to incorporate simultaneous arrival as another criterion in navigation. The basic idea behind simultaneous arrival is to have all robots in the team come to a time agreement on arrival, called the consensus time. During the mission, as obstacles may be present, each robot's estimated time of arrival may change as well; at every instant we choose the maximum time-to-target across the team as the consensus time. Each robot then adjusts its own velocity so that its time-to-target stays as close to the consensus time as possible. We derived an additional objective function for simultaneous arrival and incorporated it into the original DWA, so simultaneous arrival becomes a matter of choosing the velocity that minimises the new combined objective. Because simultaneous arrival strictly requires information exchange, a sudden information loss could cause the team to negotiate the consensus time incorrectly. We therefore propose an expiry removal mechanism: a failed robot is removed from the active-robot list after being inactive for a specified amount of time, and the team keeps executing the mission as if the failed robot never existed.
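The velocity selection described above can be sketched as follows: each candidate velocity is scored by the original DWA cost plus a penalty for deviating from the consensus arrival time. All names here, the weight k_sync, and the flat stand-in base cost are illustrative assumptions, not the thesis implementation.

```python
def eta(distance_to_goal, v):
    """Estimated time of arrival at a candidate translational speed."""
    return float("inf") if v <= 0 else distance_to_goal / v

def consensus_time(team_etas):
    """The team agrees on the largest ETA: nobody is asked to arrive faster
    than physically possible, so everyone matches the slowest member."""
    return max(team_etas)

def sadwa_select(candidates, distance_to_goal, team_etas, base_cost, k_sync=1.0):
    """Pick the (v, w) sample minimising the combined cost: the original DWA
    objective plus a simultaneous-arrival penalty."""
    t_c = consensus_time(team_etas)
    def cost(vw):
        v, w = vw
        return base_cost(v, w) + k_sync * abs(eta(distance_to_goal, v) - t_c)
    return min(candidates, key=cost)

# Hypothetical scenario: three admissible velocity samples, a teammate that
# needs 10 s to reach its target, and a flat base DWA cost for illustration.
candidates = [(0.1, 0.0), (0.2, 0.0), (0.4, 0.0)]
best = sadwa_select(candidates, distance_to_goal=2.0,
                    team_etas=[10.0, eta(2.0, 0.2)],
                    base_cost=lambda v, w: 0.0)
```

With 2 m to go and a teammate ETA of 10 s, the sample at 0.2 m/s is selected: a faster speed would arrive early and break the synchronised arrival, a slower one would arrive late.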

Distributed Multi-Robot Coordination Framework

In this section, a framework named Distributed Multi-Robot Coordination (DMRC) is proposed to provide a systematic framework for coordination tasks. It primarily addresses two functionalities in a coordination task: task synchronisation and member failure handling. This section first describes the implementation of task synchronisation, and then discusses member failure handling to ensure the assigned missions are achieved despite member failures.
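A minimal sketch of how these two functionalities could interact: a readiness barrier for task synchronisation, plus heartbeat-based expiry removal for failure handling. Class and method names are illustrative assumptions, not the DMRC API.

```python
import time

class TeamCoordinator:
    """Illustrative sketch, not the DMRC framework itself: start a task only
    when every active member is ready (task synchronisation), and drop members
    whose heartbeats have expired (member failure handling)."""

    def __init__(self, members, expiry=2.0):
        self.expiry = expiry
        self.last_seen = {m: time.monotonic() for m in members}
        self.ready = set()

    def heartbeat(self, member, ready=False, now=None):
        now = time.monotonic() if now is None else now
        self.last_seen[member] = now
        if ready:
            self.ready.add(member)

    def active_members(self, now=None):
        now = time.monotonic() if now is None else now
        # Expiry removal: a robot silent for longer than `expiry` is treated
        # as failed and excluded from coordination.
        return {m for m, t in self.last_seen.items() if now - t < self.expiry}

    def can_start(self, now=None):
        active = self.active_members(now)
        # The task is activated simultaneously once all *active* members are ready.
        return bool(active) and active <= self.ready

team = TeamCoordinator(["r1", "r2"], expiry=2.0)
team.heartbeat("r1", ready=True, now=0.0)
team.heartbeat("r2", ready=False, now=0.0)
blocked = team.can_start(now=1.0)         # r2 is alive but not ready yet
team.heartbeat("r1", ready=True, now=2.5)
survivors = team.active_members(now=3.0)  # r2 has been silent past expiry
proceed = team.can_start(now=3.0)         # mission continues without r2
```

The explicit `now` arguments make the scenario deterministic for illustration; a real node would rely on the clock directly.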

Cooperative Mapping Testbed

The Simulation Environment

The project uses Gazebo as a virtual test environment, constructed to match the physical environment arrangement, and RViz as a visualisation tool to monitor the robots' behaviour. The figures show that the occupancy grid map detected by the robot matches the environment in Gazebo. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, by contrast, the robots' coordinates must be pre-set. The importance of pre-setting the robots' coordinate frames is discussed in the real-world part below.

The Real-World Environment

In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.

Exploration boundary condition test

The exploration boundary condition depends on the experiment environment. In simulation, initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot's exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not.

Map-Merging test

Because the initial pose of each robot matters in both the physical and simulation environments, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots' initial poses are set in the physical environment, the map-merging node produces the final merged map. If a robot's initial pose is not set correctly, the merged map will be distorted.

Cooperative Mapping Results and Evaluations

As for the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side; the left-hand side shows the simulation environment in Gazebo and the corresponding visualised map in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the results to follow any distribution, but the reduction in exploration time can still be identified from the table: cooperative exploration reduces the average exploration time by 74.6% compared with the single-robot case.

Results

To achieve the proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a working ROS environment; during this stage we also conduct a literature review to identify potential approaches and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping works on building maps of the environment, while I am responsible for localisation and navigation. After each unit is completed, we perform system integration and test in the physical environment, which marks the end of this semester's plan. Next semester we work on robot coordination: Zeping is in charge of establishing a distributed network among the robots, while I work on the coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.

[Figure: ETA of 5 robots over time]
[Figure: trajectories under dynamic obstacles]
[Figure: robot vs human in an environment]
[Figure: simulation environment with dynamic obstacles]

Conclusion

Cooperative Mapping

Cooperative mapping consists of several processes inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to map the surrounding environment from odometry and LiDAR input, then passes the generated map to the exploration and map-merging processes. In exploration, each robot explores a defined area to reduce the exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) algorithm, the average reduction is 74.6%. Each robot publishes its own map to the remote PC, where the map-merging node merges the maps immediately. Once a robot's exploration finishes the robot stops, but the map-merging node must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to keep the overlapping area as small as possible, and a larger sample of results should be collected. In addition, a beacon should be applied instead of manually setting each robot's initial pose for map merging, which would also simplify the exploration process.


References

[1] Pyo, Y., Cho, H., Jung, R. & Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.

[2] Wang, H., Huang, M. & Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd., p. 482.

[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.

[4] Claes, D., Hennes, D., Tuyls, K. & Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.

[5] Thrun, S., Burgard, W. & Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.

[6] Sariff, N. & Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.

[7] Fox, D., Burgard, W. & Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics & Automation Magazine, 4, pp. 23-33.

[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.

[9] Claes, D., Hennes, D., Tuyls, K. & Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.

[10] Durrant-Whyte, H. & Bailey, T. Simultaneous Localisation and Mapping (SLAM): Part I, The Essential Algorithms. IEEE Robotics & Automation Magazine.

[11] Mader, B. 2011. Localization and Mapping with Autonomous Robots. Radboud University Nijmegen.

[12] Topiwala, A. et al. 2018. Frontier Based Exploration for Autonomous Robot. Cornell University Library. Available: http://search.proquest.com/docview/2073376204/ [Accessed Nov 2019].

[13] Verbiest, K. et al. 2015. Autonomous Frontier Based Exploration for Mobile Robots. Lecture Notes in Computer Science (including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov 2019].

[14] Hörner, J. 2016. Map-merging for Multi-robot System. Charles University in Prague. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en.