<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1693573</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1693573"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1693573"/>
	<updated>2026-04-24T19:42:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15024</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15024"/>
		<updated>2020-06-09T01:58:21Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Simultaneous Arrival */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Human beings are gifted with a wonderful ability that one might subconsciously take for granted: the ability to collaborate. Some tasks are naturally rooted in collaboration, such as construction, transportation, or production. Such tasks require each individual to perform collaborative activities to satisfy the team goals in a more productive and efficient manner. Coordination in a multi-robot system exhibits system flexibility and reliability, in that the failure of one or more robots does not affect the performance of the whole system. A task executed by a multi-robot system takes less execution time, computational resource, or energy consumption than a task executed by a single robot. Executing a task, regardless of its complexity, requires a robot to make proper decisions to optimally satisfy the requirements. Decisions are usually made by analysing data extracted from the sensors a robot is equipped with. Recent advancements in the field of Artificial Intelligence (AI), along with the commercialisation of various high-end sensors, provide a robot with the ability to make complex decisions on its own in response to the surrounding environment. Furthermore, a joint mission executed by a multi-robot system also requires information to be exchanged frequently among team members. The information exchange process allows one robot to make decisions in harmony with other team members to satisfy the team’s goal. It is therefore necessary to establish a reliable communication channel within the team. Thus far, emerging computer-network technologies, such as the Internet of Things, have enabled robust, efficient and scalable protocols for inter-node communications. This research primarily focuses on the question: can the combination of AI and advancements in information technology enable a multi-robot system to execute cooperative missions under disturbances in a robust, reliable and resilient manner?&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In a rapidly changing contested environment, a team of robots navigates, individually and collectively, through an area and arrives at a set of targets simultaneously. The team is loaded with a priori knowledge relevant to the contested operating environment. Each robot relies on its sensors to collect data on the surrounding environment, and uses AI techniques to process the data so that this prior knowledge is updated periodically to cope with unpredictable changes in the operating environment. Throughout the mission, information exchange is required to combine each robot’s local information so that the team’s navigation mission can be achieved. The navigation part executes based on the map generated from Cooperative Mapping. In the event of a member failure, the team coordinates the information from the active robots in a way that is adaptively reorganised to achieve the mission. To add another layer of guarantee to simultaneous arrival, every new navigation task must be synchronised, in other words, activated at the same time across the team members. From this scenario, several research objectives can be identified. The research aims to study various navigation techniques and apply the domain knowledge to simultaneous-arrival missions. It also outlines different computer networking approaches to establish a reliable, robust and scalable communication medium for a multi-robot system. This research also aims to develop a multi-robot coordination framework which serves as an abstract interface to handle task synchronisation and team member failures. With these objectives stated, the research delivers a multi-robot testbed where all of the research’s studies and proposed techniques can be verified in a unified system.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides different development environments for developing robot application programs [1]. ROS has built one of the largest communities in robotics, so developers and other talented people worldwide can share their findings and solutions, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted by industry for cooperative tasks with closed-source components. ROS also supports good debugging tools such as rqt, a visualisation tool that helps reveal errors on the robot while it is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before running any program, because nodes need it to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS; by convention, one program is designed for one functionality. Nodes exchange information through topics, registering with the master node and then communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message may only travel unidirectionally from one node to another, through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what each node communicates about, just like a topic in conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [1].&lt;br /&gt;
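The publish/subscribe pattern described above can be sketched in plain Python. This is a simplified stand-in for the ROS master and its TCP/IP transport, not actual ROS code; the topic name and message contents are made up for illustration:

```python
# Minimal sketch of ROS-style publish/subscribe (not actual ROS code):
# a registry plays the role of the master, mapping topic names to
# subscriber callbacks.

class Master:
    """Name server: nodes register topics here, as with the ROS master."""
    def __init__(self):
        self.topics = {}  # topic name to list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: from the publisher to every subscriber.
        for callback in self.topics.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)       # one node registers interest
master.publish("/scan", {"ranges": [1.0, 2.5]})  # another node publishes
print(received)  # → [{'ranges': [1.0, 2.5]}]
```

In real ROS the master only performs name lookup and the nodes then talk to each other directly, but the one-way topic flow is the same.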
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. The TurtleBot3 is the most efficient and economical selection, being low-cost with the necessary functionality embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the exploration time needed to obtain the environment map, compared with single-robot mapping. It is separated into several processes. Each robot takes odometry data collected from its IMU and wheel encoder, together with scan data collected from its laser scan sensor, and runs SLAM and exploration to generate an individually detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and processed with image-processing techniques using OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping, meaning it runs two processes together: localization and mapping. Basically, a robot runs SLAM to build the map it will itself use to navigate or explore. The localization part of SLAM means that while the robot is building its map from landmarks, it is also localizing itself against those landmarks at the same time [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping is a SLAM approach based on the Rao-Blackwellized particle filter (RBPF), in which the particle, meaning the pose (position and orientation) of the robot, is the main variable in the process [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping, however, the pose is determined from observations (LiDAR information) and controls (odometry information) [11].&lt;br /&gt;
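The particle-filter idea behind gmapping can be illustrated with a small sketch. The poses and weights below are made up, and a real RBPF maintains a full map per particle; this only shows the weight-proportional resampling step:

```python
import random

# Toy RBPF-style resampling step: each particle carries a pose guess;
# weights come from how well the LiDAR observation matches the map, and
# resampling keeps the particles that explain the data best.
random.seed(0)

particles = [(0.0, 0.0, 0.00), (0.1, 0.0, 0.05), (0.2, 0.1, 0.10)]  # (x, y, yaw)
weights = [0.1, 0.7, 0.2]  # assumed observation likelihoods (made up)

def resample(particles, weights):
    """Draw a new particle set proportionally to the weights."""
    return random.choices(particles, weights=weights, k=len(particles))

new_particles = resample(particles, weights)
# The high-weight pose (0.1, 0.0, 0.05) tends to dominate the new set.
```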
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project is to explore the unknown surrounding environment with the robots automatically, instead of a human driving the SLAM manually. Exploration can be implemented in both single-robot mapping and cooperative mapping. The frontier-based exploration approach is used to detect frontiers, which is the basic principle of exploration. The exploration process can be separated into SLAM and navigation parts, using gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map from which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier details are discussed below. Once the destination has been set, the robot navigates to that place using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner produces the path to the destination of the current navigation step, which is not necessarily the real final destination, and the local path planner can be regarded as an assistant that helps the robot avoid obstacles while travelling along the path. Because the aim of cooperative mapping is to reduce the exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment [12][13].&lt;br /&gt;
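A frontier cell is a free cell adjacent to unknown space. The detection step at the heart of WFD can be sketched on a small occupancy grid; cell values follow the usual ROS convention (-1 unknown, 0 free, 100 occupied), and the grid here is made up. The full WFD algorithm is a queue-based wavefront search, which this linear scan does not reproduce:

```python
# Minimal frontier detection on an occupancy grid (a sketch, not the
# full queue-based WFD): a frontier cell is free and touches unknown.
UNKNOWN, FREE, OCCUPIED = -1, 0, 100

grid = [
    [0,   0,  -1],
    [0, 100,  -1],
    [0,   0,   0],
]

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    result = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue  # only free cells can be frontiers
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nr in range(rows) and nc in range(cols) and grid[nr][nc] == UNKNOWN:
                    result.append((r, c))
                    break
    return result

print(frontiers(grid))  # → [(0, 1), (2, 2)]
```

The exploration node would then pick one of these cells (or a cluster centroid) as the next navigation goal.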
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps coming from each robot are subscribed to as topics published to the ROS master node. After a map is subscribed from each robot, it is processed by the map-merging topic, and the final merged map is then published to all robots in preparation for the following tasks. The map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive the map from each robot over the ad-hoc network, apply Oriented FAST and Rotated BRIEF (ORB) feature detection, apply the image-matching algorithm proposed in [14] together with the general image-stitching algorithm in OpenCV, and output the merged map to the simultaneous-arrival task.&lt;br /&gt;
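Once the relative transform between two maps has been estimated (by ORB feature matching and stitching in the actual pipeline), the overlay step itself is simple. A simplified sketch with a known integer cell offset; the maps and offset are made up, and known cells only fill in cells the other map left unknown (-1):

```python
# Simplified map merge: overlay map_b onto map_a at a known (dr, dc)
# cell offset. The real pipeline estimates this transform with ORB
# features and OpenCV image stitching; here the transform is given.
def merge(map_a, map_b, dr, dc):
    rows = max(len(map_a), dr + len(map_b))
    cols = max(len(map_a[0]), dc + len(map_b[0]))
    merged = [[-1] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):           # copy the first map as-is
        for c, v in enumerate(row):
            merged[r][c] = v
    for r, row in enumerate(map_b):           # fill unknowns from the second
        for c, v in enumerate(row):
            if v != -1 and merged[r + dr][c + dc] == -1:
                merged[r + dr][c + dc] = v
    return merged

map_a = [[0,   0, -1],
         [0, 100, -1]]
map_b = [[0, 100],
         [0,   0]]
print(merge(map_a, map_b, 0, 1))  # → [[0, 0, 100], [0, 100, 0]]
```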
&lt;br /&gt;
== Cooperative Navigation ==&lt;br /&gt;
&lt;br /&gt;
Simultaneous Arrival using Dynamic Window Approach. In a complex environment where dynamic obstacles may be present, simultaneous-arrival tasks must perform reliable obstacle avoidance to cope with the uncertainties of the operating environment. All members must respond quickly to unanticipated changes in the environment, yet remain cooperative with the others to guarantee a synchronised time of arrival. The Dynamic Window Approach (DWA), combined with layered costmaps, is a commonly used and well-documented motion planner for ROS navigation tasks. To the best of our knowledge up to the thesis submission time, DWA, despite its reputation, has not been well applied to the field of simultaneous arrival. In this section, a modified DWA is proposed that incorporates simultaneous arrival as a criterion in navigation tasks. The proposed technique is called Simultaneous Arrival using Dynamic Window Approach (SADWA).&lt;br /&gt;
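The core of DWA is sampling velocities inside the dynamic window, simulating the resulting short trajectories, and scoring them. A stripped-down sketch of that loop; the velocity samples, weights and goal are made up, real DWA also scores obstacle clearance from the costmap, and SADWA would add an arrival-time term to the score:

```python
import math

# Stripped-down DWA scoring sketch: sample (v, w) pairs in the dynamic
# window, simulate a short arc, and keep the best-scoring velocity.
def best_velocity(pose, goal, dt=1.0):
    x, y, theta = pose
    best, best_score = None, -math.inf
    for v in (0.05, 0.10, 0.15, 0.22):   # admissible linear velocities (m/s)
        for w in (-0.5, 0.0, 0.5):       # admissible angular velocities (rad/s)
            # Endpoint of a short simulated motion under (v, w).
            nx = x + v * math.cos(theta + w * dt) * dt
            ny = y + v * math.sin(theta + w * dt) * dt
            dist = math.hypot(goal[0] - nx, goal[1] - ny)
            # Prefer progress toward the goal, then higher speed.
            score = 2.0 * (-dist) + 0.1 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best

# Robot at the origin facing the goal: drive straight at full speed.
print(best_velocity((0.0, 0.0, 0.0), (1.0, 0.0)))  # → (0.22, 0.0)
```

The chosen (v, w) pair is executed for one control cycle and the whole search repeats, which is what lets DWA react to dynamic obstacles.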
&lt;br /&gt;
== Distributed Multi-Robot Coordination Framework ==&lt;br /&gt;
In this section, a framework named Distributed Multi-Robot Coordination (DMRC) is proposed to provide a systematic basis for coordination tasks. It primarily addresses two functionalities of a coordination task: task synchronisation and member-failure handling. This section first describes the implementation of task synchronisation, and then discusses member-failure handling to ensure the assigned missions are achieved despite a member failure.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation testing in this project uses Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is the visualisation tool used to monitor the robots’ behaviour. The graphs show that the occupancy grid map detected by the robot is the same as the one in the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, the robots’ coordinates need to be pre-set, and further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, with the exception of Gazebo, because the main function of Gazebo is to simulate the environment: once robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supports a common fixed reference frame for both robots, which can be used in place of the Gazebo environment. Once the two robots’ initial poses have been set in the physical environment, the map-merging process produces the final merged map. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations ==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []... The left-hand side shows the simulation environment in Gazebo and the corresponding map in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the results to follow any distribution, but the reduction in exploration time can still be identified from the table: the average exploration time is reduced by 74.6% between the single-robot and cooperative cases.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce the exploration time; with the modified Wavefront Frontier Detector (WFD) exploration algorithm, test results show an average time reduction of 74.6%. After each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. When each robot’s exploration process has finished, the robot stops, but map merging must be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to minimise the overlapping area, and a larger sample size needs to be collected. In addition, a beacon should be used instead of setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
[[File:Trajectory-obstacle.png|thumb|left|Trajectories under dynamic obstacles]]&lt;br /&gt;
[[File:Robot-vs-human.png|frameless|none|Robot vs human in an environment]]&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches to our project, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to its destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
[[File:ETA.png|thumb|ETA of 5 robots over time.]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot: it can now localise itself and move to a specified location within the map. We are in the process of optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Y. Pyo, H. Cho, R. Jung and T. Lim, ROS Robot Programming, ROBOTIS Co., Ltd., 2017, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] H. Wang, M. Huang and D. Wu, &quot;A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment,&quot; Springer Nature Singapore Pte Ltd., 2019, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] A. Koubaa, Robot Path Planning and Cooperation, Springer, 2019, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, &quot;Collision Avoidance under Bounded Localization Uncertainty,&quot; 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, Cambridge, Mass.: MIT Press, 2010, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] N. Sariff and N. Buniyamin, &quot;An Overview of Autonomous Mobile Robot Path Planning Algorithms,&quot; 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] D. Fox, W. Burgard and S. Thrun, &quot;The dynamic window approach to collision avoidance,&quot; IEEE Robotics &amp;amp; Automation Magazine, vol. 4, 1997, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] A. Koubaa, Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG, 2017.&lt;br /&gt;
&lt;br /&gt;
[9] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, &quot;Collision avoidance under bounded localization uncertainty,&quot; 2012.&lt;br /&gt;
&lt;br /&gt;
[10] H. Durrant-Whyte and T. Bailey, &quot;Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms.&quot;&lt;br /&gt;
&lt;br /&gt;
[11] B. Mader, &quot;Localization and Mapping with Autonomous Robots,&quot; Radboud University Nijmegen, 2011.&lt;br /&gt;
&lt;br /&gt;
[12] A. Topiwala et al., &quot;Frontier Based Exploration for Autonomous Robot,&quot; Cornell University Library, 2018. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[13] K. Verbiest et al., &quot;Autonomous Frontier Based Exploration for Mobile Robots,&quot; Lecture Notes in Computer Science, 2015. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots. [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[14] J. Hörner, &quot;Map-merging for multi-robot system,&quot; Charles University in Prague, 2016. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15023</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15023"/>
		<updated>2020-06-09T01:50:57Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping Testbed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Human beings are gifted with a wonderful ability that one might subconsciously take for granted: the ability to collaborate. Some tasks are naturally rooted in collaboration, such as construction, transportation, or production. Such tasks require each individual to perform collaborative activities to satisfy the team goals in a more productive and efficient manner. Coordination in a multi-robot system exhibits system flexibility and reliability, in that the failure of one or more robots does not affect the performance of the whole system. A task executed by a multi-robot system takes less execution time, computational resource, or energy consumption than a task executed by a single robot. Executing a task, regardless of its complexity, requires a robot to make proper decisions to optimally satisfy the requirements. Decisions are usually made by analysing data extracted from the sensors a robot is equipped with. Recent advancements in the field of Artificial Intelligence (AI), along with the commercialisation of various high-end sensors, provide a robot with the ability to make complex decisions on its own in response to the surrounding environment. Furthermore, a joint mission executed by a multi-robot system also requires information to be exchanged frequently among team members. The information exchange process allows one robot to make decisions in harmony with other team members to satisfy the team’s goal. It is therefore necessary to establish a reliable communication channel within the team. Thus far, emerging computer-network technologies, such as the Internet of Things, have enabled robust, efficient and scalable protocols for inter-node communications. This research primarily focuses on the question: can the combination of AI and advancements in information technology enable a multi-robot system to execute cooperative missions under disturbances in a robust, reliable and resilient manner?&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In a rapidly changing contested environment, a team of robots navigates, individually and collectively, through an area and arrives at a set of targets simultaneously. The team is loaded with a priori knowledge relevant to the contested operating environment. Each robot relies on its sensors to collect data on the surrounding environment, and uses AI techniques to process the data so that this prior knowledge is updated periodically to cope with unpredictable changes in the operating environment. Throughout the mission, information exchange is required to combine each robot’s local information so that the team’s navigation mission can be achieved. The navigation part executes based on the map generated from Cooperative Mapping. In the event of a member failure, the team coordinates the information from the active robots in a way that is adaptively reorganised to achieve the mission. To add another layer of guarantee to simultaneous arrival, every new navigation task must be synchronised, in other words, activated at the same time across the team members. From this scenario, several research objectives can be identified. The research aims to study various navigation techniques and apply the domain knowledge to simultaneous-arrival missions. It also outlines different computer networking approaches to establish a reliable, robust and scalable communication medium for a multi-robot system. This research also aims to develop a multi-robot coordination framework which serves as an abstract interface to handle task synchronisation and team member failures. With these objectives stated, the research delivers a multi-robot testbed where all of the research’s studies and proposed techniques can be verified in a unified system.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides different development environments for developing robot application programs [1]. ROS has built one of the largest communities in robotics, so developers and other talented people worldwide can share their findings and solutions, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted by industry for cooperative tasks with closed-source components. ROS also supports good debugging tools such as rqt, a visualisation tool that helps reveal errors on the robot while it is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before running any program, because nodes need it to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS; by convention, one program is designed for one functionality. Nodes exchange information through topics, registering with the master node and then communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message may only travel unidirectionally from one node to another, through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what each node communicates about, just like a topic in conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [1].&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. The TurtleBot3 is the most efficient and economical selection, being low-cost with the necessary functionality embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the exploration time needed to obtain the environment map, compared with single-robot mapping. It is separated into several processes. Each robot takes odometry data collected from its IMU and wheel encoder, together with scan data collected from its laser scan sensor, and runs SLAM and exploration to generate an individually detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and processed with image-processing techniques using OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping, two processes performed together. The map a robot builds through SLAM is the basis on which the robot navigates and explores. The localisation part means that while the robot is building its map from landmarks, it is also localising itself against those landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which the particle, representing the pose (position and orientation) of the robot, is the main variable [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping, by contrast, the pose is determined from the observation (LiDAR information) and the control (odometry information) [11].&lt;br /&gt;
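The weighting step at the heart of a particle filter can be sketched in one dimension: candidate poses (particles) are weighted by how well a range observation matches what would be measured from each pose, and the weighted mean gives the pose estimate. The landmark position, observation and Gaussian sensor model below are illustrative assumptions, not Gmapping&amp;#039;s actual implementation, which tracks full 2-D poses and maps.&lt;br /&gt;

```python
import math

# One weighting-and-estimation step of a particle filter for a 1-D pose.
# Illustrative only: values and the sensor model are assumptions.

landmark = 10.0         # known landmark position (assumed)
observation = 7.0       # measured range to the landmark (assumed)
sigma = 0.5             # sensor noise standard deviation (assumed)

particles = [2.0, 2.5, 3.0, 3.5, 4.0]   # candidate robot poses

def likelihood(pose):
    # Gaussian likelihood of the observation given this candidate pose.
    expected = landmark - pose           # range we would see from this pose
    return math.exp(-((observation - expected) ** 2) / (2 * sigma ** 2))

weights = [likelihood(p) for p in particles]
total = sum(weights)
weights = [w / total for w in weights]   # normalise to sum to 1

estimate = sum(p * w for p, w in zip(particles, weights))
# The weights peak at pose 3.0, where the expected range equals the
# observation, so the estimate lands there.
```

A full RBPF repeats this weighting after every motion update and resamples the particles in proportion to their weights, with each particle additionally carrying its own map hypothesis.&lt;br /&gt;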
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers. The process separates into a SLAM part and a navigation part, implemented with Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed in the following content. Once the destination is set, the robot navigates to it using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner handles the destination of the current navigation step (not necessarily the real final destination), while the local path planner can be regarded as an assistant that helps the robot avoid obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment [12][13].&lt;br /&gt;
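The frontier definition underlying WFD can be shown on a toy occupancy grid: a frontier cell is a free cell adjacent to at least one unknown cell. This is a simplified sketch of the definition only; the real WFD algorithm additionally groups frontier cells into frontiers via breadth-first search and picks exploration targets among them. The grid values and layout below are assumed for illustration.&lt;br /&gt;

```python
# Find frontier cells in an occupancy grid: free cells (0) that border
# at least one unknown cell (-1). Occupied cells are 1.
# Simplified sketch of the frontier definition used by WFD.

GRID = [
    [ 0,  0, -1],
    [ 0,  1, -1],
    [ 0,  0,  0],
]

def frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    cells = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            # check the four 4-connected neighbours for unknown space
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    cells.append((r, c))
                    break
    return cells
```

On this grid the frontier cells are (0, 1) and (2, 2): both are free and touch the unknown column on the right, while the other free cells are surrounded by known space.&lt;br /&gt;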
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master. Each subscribed map is processed in the map-merging topic, and the resulting merged map is published to all robots, ready for the following tasks. The map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match the maps using the image-matching algorithm proposed in [14], and stitch them with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
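The final compositing step can be sketched under the assumption that the transform between the two maps is already known. In the actual pipeline that transform comes from ORB feature matching and OpenCV stitching; here a known integer offset (dx, dy) stands in for it, and known cells from one grid simply fill in the unknown cells of the other. Grid contents are illustrative.&lt;br /&gt;

```python
# Composite two occupancy grids given a KNOWN integer offset (dx, dy).
# Hypothetical sketch: in the real pipeline the transform comes from ORB
# feature matching and OpenCV image stitching, not from a known value.
# Cell values: 0 = free, 1 = occupied, -1 = unknown.

def merge(base, overlay, dx, dy):
    merged = [row[:] for row in base]  # copy so the input grid is untouched
    for r, row in enumerate(overlay):
        for c, val in enumerate(row):
            br, bc = r + dy, c + dx    # overlay cell mapped into base frame
            if 0 <= br < len(merged) and 0 <= bc < len(merged[0]):
                if merged[br][bc] == -1:   # known cells fill unknown ones
                    merged[br][bc] = val
    return merged

map_a = [[0, 0, -1],
         [0, 1, -1]]
map_b = [[0, 1],
         [0, 0]]
merged = merge(map_a, map_b, dx=1, dy=0)
```

Here the unknown right-hand column of map_a is filled in by the shifted map_b, while cells map_a had already observed are left as they were.&lt;br /&gt;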
&lt;br /&gt;
== Simultaneous Arrival ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as the virtual test environment, constructed to match the arrangement of the physical environment, while RViz is a visualisation tool used to monitor the robots’ behaviour. The graphs show that the occupancy grid map detected by the robot matches the environment in Gazebo. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, by contrast, the robots’ coordinates must be pre-set. Further information on the importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Real-World Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, the initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side, with the same orientation but offset along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can play the role that Gazebo plays in simulation. Once the two robots have had their initial poses set in the physical environment, the map-merging process produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
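Why the initial pose matters can be seen from the rigid transform that maps a point from a robot&amp;#039;s local frame into the common map frame: the pose (x, y, yaw) enters every transformed point, so an error in it shifts and rotates everything that robot contributes to the merged map. The pose and point values below are assumed for illustration.&lt;br /&gt;

```python
import math

# Transform a point from a robot's local frame into the common map frame,
# given the robot's initial pose (x, y, yaw). If the assumed initial pose
# is wrong, every mapped point lands in the wrong place, which is why an
# incorrect initial pose distorts the merged map. Values are illustrative.

def to_common_frame(point, pose):
    px, py = point
    x, y, yaw = pose
    # standard 2-D rotation by yaw followed by translation to (x, y)
    cx = x + px * math.cos(yaw) - py * math.sin(yaw)
    cy = y + px * math.sin(yaw) + py * math.cos(yaw)
    return (cx, cy)

# Robot starts at (2, 0) facing 90 degrees; a point 1 m ahead of the robot
# therefore sits at (2, 1) in the common frame.
pt = to_common_frame((1.0, 0.0), (2.0, 0.0, math.pi / 2))
```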
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
As for the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side of the figures; the left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: cooperative exploration reduces the average exploration time by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to obtain a map of the surrounding environment from the odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores its defined area to reduce the exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average time reduction is 74.6%. After each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops once its exploration process is done, but map-merging must be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to keep the overlapping area as small as possible, and a larger sample size needs to be collected. In addition, a beacon should be applied instead of setting the initial pose for each robot in the map-merging topic, which would make the exploration topic easier as well.&lt;br /&gt;
&lt;br /&gt;
[[File:Trajectory-obstacle.png|thumb|left|Trajectories under dynamic obstacles]]&lt;br /&gt;
[[File:Robot-vs-human.png|frameless|none|Robot vs human in an environment]]&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches to the project, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to the destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
[[File:ETA.png|thumb|ETA of 5 robots over time.]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function for a single robot: a robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”.&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. 2011. “Localization and Mapping with Autonomous Robots”, Radboud University Nijmegen.&lt;br /&gt;
&lt;br /&gt;
[12] Topiwala, A. et al. 2018. “Frontier Based Exploration for Autonomous Robot,” Cornell University Library. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[13] Verbiest, K. et al. 2015. “Autonomous Frontier Based Exploration for Mobile Robots,” Lecture Notes in Computer Science. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[14] Hörner, J. 2016. “Map-merging for multi-robot system”, Charles University in Prague. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15022</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15022"/>
		<updated>2020-06-09T01:49:57Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Human beings are gifted with a wonderful ability that one might subconsciously take for granted: the ability to collaborate. There exist tasks naturally rooted in collaboration, such as construction, transportation, or production. Such tasks require each individual to perform collaborative activities to satisfy the team goals in a more productive and efficient manner. Coordination in a multi-robot system exhibits system flexibility and reliability, in that the failure of one or more robots does not affect the performance of the whole system. A task executed by a multi-robot system takes less execution time, computational resource, or energy consumption than a task executed by a single robot. Executing a task, regardless of its complexity, requires a robot to make proper decisions to optimally satisfy the requirements. Decisions are usually made by analysing data extracted from the sensors with which a robot is equipped. Recent advancements in the field of Artificial Intelligence (AI), along with the commercialisation of various high-end types of sensors, provide a robot with the ability to make complex decisions on its own in response to the surrounding environment. Furthermore, a joint mission executed by a multi-robot system also requires information to be exchanged frequently among team members. The information exchange process allows one robot to make decisions in harmony with other team members to satisfy the team’s goal; it is therefore necessary to establish a reliable communication channel within the team. Thus far, emerging technologies in computer networking, such as the Internet of Things, have enabled robust, efficient and scalable protocols for internode communications. This research primarily focuses on the question: can the combination of AI and information technology’s advancements enable a multi-robot system to execute cooperative missions under disturbances in a robust, reliable and resilient manner?&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In a rapidly changing contested environment, a team of robots navigates, individually and collectively, through an area and arrives at a set of targets simultaneously. The team is loaded with a priori knowledge relevant to the contested operating environment. Each robot relies on its sensors to collect data on the surrounding environment, and uses AI techniques to process the data so that this prior knowledge is updated periodically to cope with unpredictable changes in the operating environment. Throughout the mission, information exchange is required to combine each robot’s local information so that the team’s navigation mission can be achieved. The navigation part executes based on the map generated from cooperative mapping. In the event of member failure, the team coordinates the information from the active robots in a way that is adaptively reorganised to achieve the mission. To add another layer of guarantee to simultaneous arrival, every new navigation task must be synchronised, in other words activated at the same time, across the team members. From this scenario, several research objectives can be identified. The research aims to study various navigation techniques and apply the domain knowledge to simultaneous arrival missions. It also outlines different computer networking approaches to establish a reliable, robust and scalable communication medium for a multi-robot system. This research also aims to develop a multi-robot coordination framework which serves as an abstract interface to handle task synchronisation and team member failures. With the objectives stated, the research delivers a multi-robot testbed in which all of the research’s studies and proposed techniques can be verified in a unified system.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software; despite its name, it is not only an operating system but also provides development environments for building robot application programs [1]. ROS has built one of the largest communities in the world, so developers and other talented people can easily contribute and share their findings and solutions worldwide, rather than problems being solved or shared only within the limited range of a university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. ROS also supports good debugging tools such as rqt, a two-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The master supplies a name server for all node connections and message communication. It must be running before the programs, because nodes rely on it to establish connections with one another and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes register with the master node and then exchange information with one another over topics, communicating via the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data exchanged between nodes. Each message has a data type, such as integer, Boolean or floating point. A message travels in one direction only, from a publishing node to a subscribing node [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the subject of the messages a node handles, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives the messages published on it [1].&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware platform, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is an efficient and economical choice, combining low cost with the necessary embedded functionality [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for the multi-robot system aims to reduce the time needed to obtain a map of the environment compared with single-robot mapping. The process separates into several stages. Each robot takes odometry data collected from its IMU and wheel encoders, together with scan data collected from its laser scanner, and runs SLAM and exploration to generate an individual map, which is published to the ROS master. Those maps are then subscribed by the map-merging topic and processed as images with the help of OpenCV. Finally, the merged map is published for use in the multi-robot navigation task. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping, two processes performed together. The map a robot builds through SLAM is the basis on which the robot navigates and explores. The localisation part means that while the robot is building its map from landmarks, it is also localising itself against those landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which the particle, representing the pose (position and orientation) of the robot, is the main variable [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping, by contrast, the pose is determined from the observation (LiDAR information) and the control (odometry information) [11].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers. The process separates into a SLAM part and a navigation part, implemented with Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed in the following content. Once the destination is set, the robot navigates to it using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner handles the destination of the current navigation step (not necessarily the real final destination), while the local path planner can be regarded as an assistant that helps the robot avoid obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment [12][13].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master. Each subscribed map is processed in the map-merging topic, and the resulting merged map is published to all robots, ready for the following tasks. The map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match the maps using the image-matching algorithm proposed in [14], and stitch them with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as the virtual test environment, constructed to match the arrangement of the physical environment, while RViz is a visualisation tool used to monitor the robots’ behaviour. The graphs show that the occupancy grid map detected by the robot matches the environment in Gazebo. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, by contrast, the robots’ coordinates must be pre-set. Further information on the importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The Real-World Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, the initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side, with the same orientation but offset along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can play the role that Gazebo plays in simulation. Once the two robots have had their initial poses set in the physical environment, the map-merging process produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
As for the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side of the figures; the left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: cooperative exploration reduces the average exploration time by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to obtain a map of the surrounding environment from the odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores its defined area to reduce the exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average time reduction is 74.6%. After each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops once its exploration process is done, but map-merging must be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to keep the overlapping area as small as possible, and a larger sample size needs to be collected. In addition, a beacon should be applied instead of setting the initial pose for each robot in the map-merging topic, which would make the exploration topic easier as well.&lt;br /&gt;
&lt;br /&gt;
[[File:Trajectory-obstacle.png|thumb|left|Trajectories under dynamic obstacles]]&lt;br /&gt;
[[File:Robot-vs-human.png|frameless|none|Robot vs human in an environment]]&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches to the project, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to the destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
[[File:ETA.png|thumb|ETA of 5 robots over time.]]&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function for a single robot: a robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314. [zep]&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482. [zep]&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. [S.l.]: Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”. [zep]&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. “Localization and Mapping with Autonomous Robots”, Radboud University Nijmegen, 2011. [zep]&lt;br /&gt;
&lt;br /&gt;
[12] Topiwala, A. et al. “Frontier Based Exploration for Autonomous Robot,” Cornell University Library, 2018. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov 2019]. [zep]&lt;br /&gt;
&lt;br /&gt;
[13] Verbiest, K. et al. “Autonomous Frontier Based Exploration for Mobile Robots,” Lecture Notes in Computer Science, 2015. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots. [Accessed Nov 2019]. [zep]&lt;br /&gt;
&lt;br /&gt;
[14] Hörner, J. “Map-merging for multi-robot system”, Charles University in Prague, 2016. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en. [zep]&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15010</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15010"/>
		<updated>2020-06-09T01:27:44Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a team of robots tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, the team must transport medical equipment to the victim’s location. Each robot carries one type of equipment, and all robots must reach the victim at the same time so that the full set of equipment is available. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, where each robot navigates around obstacles in a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information such as cognition, cooperative behaviours and trust across the team. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot collects and processes new information about its surroundings, shares that information within the team, and determines the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is more than an operating system: it also provides development environments for writing robot application programs [1]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions worldwide, rather than leaving problems unsolved or confined to a single university or company [1]. ROS can even be adapted for industrial cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a visualisation tool that helps expose errors while the robot is running.&lt;br /&gt;
The master provides a name server for node connections and message communication. It must be running before any other program starts, because nodes rely on it to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, registering with the master node and then communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another via publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the stream of messages a node produces or consumes, much like a topic in a conversation. Each topic needs at least one publisher and one subscriber: the publisher sends messages to the topic, and each subscriber is a node that receives them [1].&lt;br /&gt;
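The master/node/topic relationship above can be sketched in plain Python. This is an illustrative simplification, not ROS itself: the class names are invented for the sketch, and the real ROS master only performs name resolution, after which nodes exchange messages peer-to-peer over TCP/IP.

```python
# Minimal publish/subscribe sketch of how a ROS-style topic connects nodes.
# Plain Python, no ROS; Master here both registers and delivers messages,
# which is a simplification of the real architecture.

class Master:
    """Name server: maps topic names to the callbacks of subscriber nodes."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message unidirectionally to every subscriber.
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/robot1/map", received.append)           # subscriber node
master.publish("/robot1/map", {"width": 4, "height": 4})   # publisher node
```

A message published on a topic with no subscribers is simply dropped, which mirrors how an unsubscribed ROS topic behaves.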
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, combining low cost with the necessary embedded functionality [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to map the environment compared with single-robot mapping. The pipeline consists of several processes. Each robot takes odometry data from its IMU and wheel encoders together with scan data from its laser scanner, runs SLAM and exploration, and publishes its individually detected map to the ROS master. The map-merging topic subscribes to those maps and merges them using image processing with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow is shown in the flow-chart figure, and each process is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: it combines two processes, localization and mapping. A robot uses SLAM to build a map of its environment so that it can navigate or explore. The localization part means that while the robot builds its map from landmarks, it simultaneously localizes itself against those same landmarks [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach: it is based on the Rao-Blackwellized particle filter (RBPF), in which the particle, i.e. the pose (position and orientation) of the robot, is the main variable [2]. In a general SLAM algorithm the pose estimate relies on the map and vice versa, but in gmapping the pose is determined from the observation (LiDAR data) and the control (odometry data) [11].&lt;br /&gt;
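The particle idea behind the RBPF can be illustrated without any ROS or gmapping code. The toy 1-D filter below is a sketch under simplifying assumptions, not gmapping itself: each particle is a candidate pose, the control (odometry) shifts the particles, and an observation re-weights and resamples them. Gmapping's RBPF additionally carries a map hypothesis with each particle, which is omitted here.

```python
import math
import random

# Toy 1-D particle filter: particle = candidate robot pose.
# Illustrative sketch only; all noise values are assumptions.

def motion_update(particles, control, noise=0.1):
    # Shift every particle by the control (odometry) input plus motion noise.
    return [p + control + random.gauss(0, noise) for p in particles]

def measurement_update(particles, observed, sensor_sigma=0.5):
    # Weight particles by how well they explain the observation, then
    # resample in proportion to weight (survival of the fittest poses).
    weights = [math.exp(-((p - observed) ** 2) / (2 * sensor_sigma ** 2))
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    particles = motion_update(particles, control=0.0)
    particles = measurement_update(particles, observed=3.0)

estimate = sum(particles) / len(particles)  # converges near the observation
```

After a few update cycles the particle cloud collapses around the pose that best explains the observations, which is the behaviour gmapping exploits when tracking the robot through the map.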
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project makes the robots explore the unknown environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is the basic principle: it detects frontiers, the boundaries between explored and unexplored space. The process separates into a SLAM part and a navigation part, using gmapping and the DWA path planner respectively. The SLAM part supplies an occupancy grid map on which the exploration algorithm finds the frontiers and then sets the next navigation goal among the frontier markers. Once a goal is set, the robot navigates to it using the default DWA planner, which divides into a global and a local path planner. The global planner plans the path to the goal of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that steers the robot around obstacles along that path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores only its assigned limited area rather than the whole unknown environment [12][13].&lt;br /&gt;
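The frontier test at the heart of WFD-style exploration can be shown on a small grid. This is a simplified sketch: the real WFD algorithm performs a breadth-first search outward from the robot pose and groups frontier cells into regions before picking a goal, and the grid encoding below is an assumption for illustration.

```python
# Frontier detection on an occupancy grid (core idea of WFD exploration).
# Cell values (assumed encoding): 0 = free, 1 = occupied, -1 = unknown.
# A frontier cell is a free cell with at least one unknown 4-neighbour.

def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:          # only free cells can be frontiers
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

grid = [
    [ 0,  0, -1],
    [ 0,  1, -1],
    [ 0,  0,  0],
]
frontiers = find_frontiers(grid)
```

Each frontier cell is a candidate navigation goal: driving there extends the known map, and exploration terminates when no frontiers remain.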
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master. Each subscribed map is processed in the map-merging topic, and the resulting merged map is published back to all robots for the following tasks. Map merging necessarily relies on the single-robot exploration: it searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match the maps using the image-matching algorithm proposed in [14] together with the general image-stitching pipeline in OpenCV, and finally output the merged map to the simultaneous-arrival task.&lt;br /&gt;
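The final compositing step of the merging pipeline can be sketched as below. To keep the sketch self-contained it makes an assumption the real pipeline does not: the relative transform between the two maps (here a pure translation) is taken as given, whereas in practice it is estimated by matching ORB features with OpenCV. The cell encoding is also illustrative.

```python
# Compositing two occupancy grids once their relative transform is known.
# Assumed cell values: -1 = unknown, 0 = free, 1 = occupied.
# Known cells override unknowns; on conflict, occupied wins (via max).

def merge_maps(map_a, map_b, offset):
    dr, dc = offset  # map_b's origin expressed in map_a's grid coordinates
    rows = max(len(map_a), dr + len(map_b))
    cols = max(len(map_a[0]), dc + len(map_b[0]))
    merged = [[-1] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):
        for c, v in enumerate(row):
            merged[r][c] = v
    for r, row in enumerate(map_b):
        for c, v in enumerate(row):
            if v != -1:
                merged[r + dr][c + dc] = max(merged[r + dr][c + dc], v)
    return merged

map_a = [[0, 0, -1],
         [0, 1, -1]]
map_b = [[0, 0],
         [0, 1]]
merged = merge_maps(map_a, map_b, offset=(0, 1))
```

The unknown cells of one robot's map are filled in by the other robot's observations, which is exactly why merging shortens the overall exploration.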
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The project uses Gazebo as the virtual environment for simulation testing, constructed to match the physical environment, and RViz as the visualisation tool for monitoring the robots’ behaviour. The resulting graphs show that the occupancy grid map detected by the robot matches the environment in Gazebo. One key advantage of Gazebo is that it provides a common fixed coordinate frame for all robots in the simulation; in the physical world, each robot’s coordinates must instead be pre-set, and the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the same visualisation tools are still used for testing and debugging, but Gazebo is not: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, the initial poses can be set before the robots run in Gazebo; with two robots exploring the whole unknown environment, their initial poses were placed side by side with the same orientation but offset along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots do not share a common reference frame. A beacon could solve this problem; without one, the exploration pose of each robot is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot matters in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can take the place of the one provided by Gazebo. Once the two robots’ initial poses are set in the physical environment, map-merging generates the final merged map of both robots. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side; the left-hand side shows the simulation environment in Gazebo and the corresponding map in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the timing chart to follow any distribution, but the table still shows the reduction in exploration time: on average, cooperative mapping reduced exploration time by 74.6% compared with the single-robot case.&lt;br /&gt;
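The reduction rate quoted above is the relative difference between average exploration times. The timings in this sketch are placeholders, not the project's measurements; only the formula reflects how such a rate is computed.

```python
# How an exploration-time reduction rate is computed from run timings.
# The numbers below are hypothetical, not the project's measured data.

single_robot_times = [120.0, 130.0, 110.0]   # seconds per run (placeholder)
cooperative_times = [30.0, 35.0, 25.0]       # seconds per run (placeholder)

avg_single = sum(single_robot_times) / len(single_robot_times)
avg_coop = sum(cooperative_times) / len(cooperative_times)

# Relative reduction: fraction of single-robot time saved by cooperation.
reduction_rate = (avg_single - avg_coop) / avg_single
```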
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robots and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to map the surrounding environment from its odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores only its assigned area to reduce exploration time; based on test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average reduction is 74.6%. Once each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. When every robot finishes exploring it stops, but map-merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimized to minimise the overlapping area, and a larger sample of test runs should be collected. A beacon should also be used in place of manually setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage covers navigating a single robot to a destination: Zeping works on building maps of the environment, and I am responsible for localisation and navigation. After each unit is completed, we perform system integration and testing in the physical environment, which marks the end of this semester’s plan. Next semester we work on robot coordination: Zeping is in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314. [zep]&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482. [zep]&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. [S.l.]: Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”. [zep]&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. “Localization and Mapping with Autonomous Robots”, Radboud University Nijmegen, 2011. [zep]&lt;br /&gt;
&lt;br /&gt;
[12] Topiwala, A. et al. “Frontier Based Exploration for Autonomous Robot,” Cornell University Library, 2018. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov 2019]. [zep]&lt;br /&gt;
&lt;br /&gt;
[13] Verbiest, K. et al. “Autonomous Frontier Based Exploration for Mobile Robots,” Lecture Notes in Computer Science, 2015. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots. [Accessed Nov 2019]. [zep]&lt;br /&gt;
&lt;br /&gt;
[14] Hörner, J. “Map-merging for multi-robot system”, Charles University in Prague, 2016. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en. [zep]&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15007</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15007"/>
		<updated>2020-06-09T01:27:21Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a team of robots tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, the team must transport medical equipment to the victim’s location. Each robot carries one type of equipment, and all robots must reach the victim at the same time so that the full set of equipment is available. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, where each robot navigates around obstacles in a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information such as cognition, cooperative behaviours and trust across the team. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot collects and processes new information about its surroundings, shares that information within the team, and determines the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is more than an operating system: it also provides development environments for writing robot application programs [1]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions worldwide, rather than leaving problems unsolved or confined to a single university or company [1]. ROS can even be adapted for industrial cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a visualisation tool that helps expose errors while the robot is running.&lt;br /&gt;
The master provides a name server for node connections and message communication. It must be running before any other program starts, because nodes rely on it to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, registering with the master node and then communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another via publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the stream of messages a node produces or consumes, much like a topic in a conversation. Each topic needs at least one publisher and one subscriber: the publisher sends messages to the topic, and each subscriber is a node that receives them [1].&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, combining low cost with the necessary embedded functionality [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to map the environment compared with single-robot mapping. The pipeline consists of several processes. Each robot takes odometry data from its IMU and wheel encoders together with scan data from its laser scanner, runs SLAM and exploration, and publishes its individually detected map to the ROS master. The map-merging topic subscribes to those maps and merges them using image processing with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow is shown in the flow-chart figure, and each process is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: it combines two processes, localization and mapping. A robot uses SLAM to build a map of its environment so that it can navigate or explore. The localization part means that while the robot builds its map from landmarks, it simultaneously localizes itself against those same landmarks [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach: it is based on the Rao-Blackwellized particle filter (RBPF), in which the particle, i.e. the pose (position and orientation) of the robot, is the main variable [2]. In a general SLAM algorithm the pose estimate relies on the map and vice versa, but in gmapping the pose is determined from the observation (LiDAR data) and the control (odometry data) [11].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project makes the robots explore the unknown environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is the basic principle: it detects frontiers, the boundaries between explored and unexplored space. The process separates into a SLAM part and a navigation part, using gmapping and the DWA path planner respectively. The SLAM part supplies an occupancy grid map on which the exploration algorithm finds the frontiers and then sets the next navigation goal among the frontier markers. Once a goal is set, the robot navigates to it using the default DWA planner, which divides into a global and a local path planner. The global planner plans the path to the goal of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that steers the robot around obstacles along that path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores only its assigned limited area rather than the whole unknown environment [12][13].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master. Each subscribed map is processed in the map-merging topic, and the resulting merged map is published back to all robots for the following tasks. Map merging necessarily relies on the single-robot exploration: it searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match the maps using the image-matching algorithm proposed in [14] together with the general image-stitching pipeline in OpenCV, and finally output the merged map to the simultaneous-arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The project uses Gazebo as the virtual environment for simulation testing, constructed to match the physical environment, and RViz as the visualisation tool for monitoring the robots’ behaviour. The resulting graphs show that the occupancy grid map detected by the robot matches the environment in Gazebo. One key advantage of Gazebo is that it provides a common fixed coordinate frame for all robots in the simulation; in the physical world, each robot’s coordinates must instead be pre-set, and the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, these visualisation tools are still used for testing and debugging, with the exception of Gazebo: its main function is to simulate the environment, so once the robots can be tested in the real world, Gazebo is no longer used. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses were set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the one Gazebo provides. Once the two robots’ initial poses are set in the physical environment, the map-merging process produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side []..., with the Gazebo simulation environment and the corresponding visualised map in Rviz on the left. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: the average reduction rate between the single-robot and cooperative cases is 74.6%.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robots and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores only its defined area to reduce exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the reduction rate averages 74.6%. As each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Once every robot’s exploration has finished the robots stop, while map merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample size should be collected. A beacon should also be adopted in place of manually setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our proposed approach follows a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is navigating a single robot to a destination: Zeping works on building maps of the environment, while I am responsible for localisation and navigation. After each unit is completed, we perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Y. Pyo, H. Cho, R. Jung and T. Lim, ROS Robot Programming, ROBOTIS Co., Ltd., 2017, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] H. Wang, M. Huang and D. Wu, “A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment”, Springer Nature Singapore Pte Ltd., 2019, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] A. Koubaa, Robot Path Planning and Cooperation, Springer, 2019, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, “Collision Avoidance under Bounded Localization Uncertainty”, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, Cambridge, Mass.: MIT Press, 2010, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] N. Sariff and N. Buniyamin, “An Overview of Autonomous Mobile Robot Path Planning Algorithms”, 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] D. Fox, W. Burgard and S. Thrun, “The dynamic window approach to collision avoidance”, IEEE Robotics and Automation Magazine, vol. 4, pp. 23-33, 1997.&lt;br /&gt;
&lt;br /&gt;
[8] A. Koubaa, Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG, 2017.&lt;br /&gt;
&lt;br /&gt;
[9] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, “Collision avoidance under bounded localization uncertainty”, 2012.&lt;br /&gt;
&lt;br /&gt;
[10] H. Durrant-Whyte and T. Bailey, “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”.&lt;br /&gt;
&lt;br /&gt;
[11] B. Mader, “Localization and Mapping with Autonomous Robots”, Radboud University Nijmegen, 2011.&lt;br /&gt;
&lt;br /&gt;
[12] A. Topiwala et al., “Frontier Based Exploration for Autonomous Robot”, Cornell University Library, 2018. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov. 2019].&lt;br /&gt;
&lt;br /&gt;
[13] K. Verbiest et al., “Autonomous Frontier Based Exploration for Mobile Robots”, Lecture Notes in Computer Science, 2015. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov. 2019].&lt;br /&gt;
&lt;br /&gt;
[14] J. Hörner, “Map-merging for multi-robot system”, Charles University in Prague, 2016. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15006</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15006"/>
		<updated>2020-06-09T01:26:42Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly-changing environment: a team of robots must navigate to a destination based on given a-priori knowledge, for example a map. Each robot needs to collect and process new information from its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software; despite its name, it is not only an operating system but also provides development environments for building robot application programs [1]. ROS has the largest community in the field, so developers and other talented people worldwide can easily contribute and share their findings and solutions, rather than keeping problems confined within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running. &lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be running before any program starts; otherwise nodes cannot establish connections or exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, with connections negotiated via the master node following the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message flows unidirectionally from one node to another through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the stream of messages a node handles, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages to the topic, and the subscriber is the node that receives them [1].&lt;br /&gt;
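The master/publisher/subscriber relationship can be mimicked with a toy in-process stand-in. The names below are hypothetical, not the rospy API, and real ROS nodes exchange messages over TCP/IP after the master brokers the connection:

```python
class Master:
    """Toy stand-in for the ROS master: a name server that wires
    publishers to the subscribers of a named topic. Illustrative only;
    real ROS sends messages peer-to-peer (TCPROS), not in-process."""
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: publisher -> every registered subscriber.
        for callback in self._subscribers.get(topic, []):
            callback(message)

# Usage: one node subscribes to LiDAR data, another publishes a scan.
received = []
master = Master()
master.subscribe("/scan", received.append)
master.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})
```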
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the one best suited to this project: it is the most efficient and economical choice, low-cost and with the necessary functionality embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map merging for the multi-robot system aims to reduce the time needed to obtain the environment map, compared with single-robot mapping, and it separates into several processes. Each robot takes odometry data, collected from its IMU and wheel encoder, together with scan data from the laser scan sensor, runs SLAM and exploration, and publishes its individually detected map to the ROS master node. Those maps are then subscribed by the map-merging topic and processed with image-processing techniques using OpenCV. Finally, the merged map is published to my partner’s part of the project for multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: it combines two processes, localization and mapping. A robot runs SLAM to build the map it needs in order to navigate or explore. The localization part means that while the robot builds its map from landmarks, it also localizes itself against those same landmarks at the same time [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in being based on a Rao-Blackwellized particle filter (RBPF), where each particle, representing a robot pose (position and orientation), is the main variable in the process [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping the pose is determined from the observation (LiDAR information) and the control (odometry information) [11]. &lt;br /&gt;
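The sample-weight-resample loop at the heart of a particle filter can be sketched in one dimension as below. This toy example (the wall-landmark measurement model and all names are illustrative, not gmapping’s actual code) shows how odometry propagates pose particles and a range observation re-weights them:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbpf_step(particles, weights, control, measured_range, wall_x, noise=0.1):
    """One particle-filter pose update, 1-D toy: each particle is an x
    position, a wall landmark sits at wall_x, and a LiDAR-style range
    to the wall is observed."""
    # Motion update: propagate each pose sample through the odometry model.
    particles = particles + control + rng.normal(0.0, noise, particles.shape)
    # Measurement update: weight each particle by the likelihood of the
    # observed range, given the range that particle would predict.
    expected = wall_x - particles
    weights = weights * np.exp(-0.5 * ((measured_range - expected) / noise) ** 2)
    weights = weights / weights.sum()
    # Resample: concentrate particles on high-likelihood poses.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

In the full RBPF each particle additionally carries its own occupancy grid (the "Rao-Blackwellized" part: the map is computed analytically conditioned on the sampled trajectory), which is what makes gmapping tractable compared with sampling map and pose jointly.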
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project explores the unknown surrounding environment automatically with robots, instead of relying on a human to drive SLAM manually. It can be run in both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the process separates into SLAM and navigation parts, handled by gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map from which the exploration approach finds where the frontiers are and then sets a navigation destination among the frontier markers; further frontier information is discussed in the following content. Once the destination is set, the robot is navigated to that place by the default DWA path planner, which divides into a local path planner and a global path planner. The global path planner produces the route to the destination of the current navigation step (not the real final destination), while the local path planner can be regarded as an assistant that helps the robot avoid any obstacle along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]  [12][13]&lt;br /&gt;
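The frontier test applied while scanning the occupancy grid can be sketched as follows. This is an illustrative reimplementation of the basic principle only (cell values follow the ROS OccupancyGrid convention), not the project's actual exploration node:

```python
import numpy as np

# ROS OccupancyGrid cell conventions (assumed here): 0 = free,
# 100 = occupied, -1 = unknown.
FREE, UNKNOWN = 0, -1

def find_frontier_cells(grid):
    """Return (row, col) indices of frontier cells: free cells that
    border at least one unknown cell. This is the basic test a
    frontier-based detector applies while scanning the known map."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 4-connected neighbourhood check for unknown space.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers
```

WFD proper organises this test into a breadth-first search seeded at the robot pose and groups adjacent frontier cells into frontier markers, one of which then becomes the next navigation goal.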
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the map from each robot is subscribed as a topic published to the ROS master node. Each subscribed map is processed in the map-merging topic, and the final merged map is then published to all robots in preparation for the following tasks. This map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: input the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them with the image-matching algorithm proposed by [14], stitch with the general image-stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project use Gazebo as the virtual environment, constructed to match the physical environment arrangement, while Rviz is the visualisation tool used to monitor the robots’ behaviours. The figures show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world the robots’ coordinates must be pre-set instead, and further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, these visualisation tools are still used for testing and debugging, with the exception of Gazebo: its main function is to simulate the environment, so once the robots can be tested in the real world, Gazebo is no longer used. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses were set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the one Gazebo provides. Once the two robots’ initial poses are set in the physical environment, the map-merging process produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side []..., with the Gazebo simulation environment and the corresponding visualised map in Rviz on the left. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: the average reduction rate between the single-robot and cooperative cases is 74.6%.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robots and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores only its defined area to reduce exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the reduction rate averages 74.6%. As each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Once every robot’s exploration has finished the robots stop, while map merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample size should be collected. A beacon should also be adopted in place of manually setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our proposed approach follows a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is navigating a single robot to a destination: Zeping works on building maps of the environment, while I am responsible for localisation and navigation. After each unit is completed, we perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Y. Pyo, H. Cho, R. Jung and T. Lim, ROS Robot Programming, ROBOTIS Co., Ltd., 2017, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] H. Wang, M. Huang and D. Wu, “A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment”, Springer Nature Singapore Pte Ltd., 2019, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] A. Koubaa, Robot Path Planning and Cooperation, Springer, 2019, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, “Collision Avoidance under Bounded Localization Uncertainty”, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] S. Thrun, W. Burgard and D. Fox, Probabilistic Robotics, Cambridge, Mass.: MIT Press, 2010, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] N. Sariff and N. Buniyamin, “An Overview of Autonomous Mobile Robot Path Planning Algorithms”, 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] D. Fox, W. Burgard and S. Thrun, “The dynamic window approach to collision avoidance”, IEEE Robotics and Automation Magazine, vol. 4, pp. 23-33, 1997.&lt;br /&gt;
&lt;br /&gt;
[8] A. Koubaa, Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG, 2017.&lt;br /&gt;
&lt;br /&gt;
[9] D. Claes, D. Hennes, K. Tuyls and W. Meeussen, “Collision avoidance under bounded localization uncertainty”, 2012.&lt;br /&gt;
&lt;br /&gt;
[10] H. Durrant-Whyte and T. Bailey, “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”.&lt;br /&gt;
&lt;br /&gt;
[11] B. Mader, “Localization and Mapping with Autonomous Robots”, Radboud University Nijmegen, 2011.&lt;br /&gt;
&lt;br /&gt;
[12] A. Topiwala et al., “Frontier Based Exploration for Autonomous Robot”, Cornell University Library, 2018. Available: http://search.proquest.com/docview/2073376204/. [Accessed Nov. 2019].&lt;br /&gt;
&lt;br /&gt;
[13] K. Verbiest et al., “Autonomous Frontier Based Exploration for Mobile Robots”, Lecture Notes in Computer Science, 2015. Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov. 2019].&lt;br /&gt;
&lt;br /&gt;
[14] J. Hörner, “Map-merging for multi-robot system”, Charles University in Prague, 2016. Available: https://is.cuni.cz/webapps/zzp/download/130183203/?lang=en.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15001</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=15001"/>
		<updated>2020-06-09T01:22:59Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly-changing environment: a team of robots must navigate to a destination based on given a-priori knowledge, for example a map. Each robot needs to collect and process new information from its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software; despite its name, it is not only an operating system but also provides development environments for building robot application programs [1]. ROS has the largest community in the field, so developers and other talented people worldwide can easily contribute and share their findings and solutions, rather than keeping problems confined within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running. &lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be running before any program starts; otherwise nodes cannot establish connections or exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, with connections negotiated via the master node following the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message flows unidirectionally from one node to another through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the stream of messages a node handles, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages to the topic, and the subscriber is the node that receives them [1].&lt;br /&gt;
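The master/publisher/subscriber relationship can be mimicked with a toy in-process stand-in. The names below are hypothetical, not the rospy API, and real ROS nodes exchange messages over TCP/IP after the master brokers the connection:

```python
class Master:
    """Toy stand-in for the ROS master: a name server that wires
    publishers to the subscribers of a named topic. Illustrative only;
    real ROS sends messages peer-to-peer (TCPROS), not in-process."""
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: publisher -> every registered subscriber.
        for callback in self._subscribers.get(topic, []):
            callback(message)

# Usage: one node subscribes to LiDAR data, another publishes a scan.
received = []
master = Master()
master.subscribe("/scan", received.append)
master.publish("/scan", {"ranges": [1.2, 0.8, 2.5]})
```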
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. It is an efficient and economical choice: low-cost, with the necessary functionalities already embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment, compared with single-robot mapping. The process is separated into several stages. Each robot takes odometry data, collected from its IMU and wheel encoder, together with scan data from its laser scan sensor, and runs SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and processed with image-processing techniques using OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: two processes carried out together. A robot runs SLAM to build a map of its environment so that it can navigate or explore. The localisation part means that while the robot is building its map from landmarks, it simultaneously localises itself against those same landmarks [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information) [11].&lt;br /&gt;
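As a rough illustration of the RBPF idea behind gmapping, one filter step propagates each pose particle with the control (odometry), weights it by the observation (LiDAR) likelihood, and resamples. The motion noise values and the sensor model below are made-up placeholders for illustration, not gmapping’s actual models.&lt;br /&gt;

```python
# Sketch of one Rao-Blackwellized-particle-filter-style pose update, the idea
# behind gmapping. The noise values and sensor model are placeholders.
import random

def pf_step(particles, control, measurement, sensor_model):
    # 1. Propagate each pose particle through a noisy motion model (control = odometry).
    moved = [(x + control[0] + random.gauss(0.0, 0.05),
              y + control[1] + random.gauss(0.0, 0.05),
              theta + control[2] + random.gauss(0.0, 0.01))
             for (x, y, theta) in particles]
    # 2. Weight each particle by how well the observation fits its pose.
    weights = [sensor_model(p, measurement) for p in moved]
    # 3. Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Toy sensor model: particles whose x is nearer the measured value score higher.
model = lambda p, z: 1.0 / (1.0 + abs(p[0] - z))
particles = [(0.0, 0.0, 0.0)] * 100
particles = pf_step(particles, (1.0, 0.0, 0.0), 1.0, model)
```
&lt;br /&gt;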
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots automatically explore the unknown surrounding environment, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the exploration process is separated into a SLAM part and a navigation part, using gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier details are discussed in the following content. Once the destination is set, the robot navigates there using the default DWA path planner, which is divided into a global path planner and a local path planner: the global planner targets the destination of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores only a limited area instead of the whole unknown environment [12][13].&lt;br /&gt;
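The frontier-detection core of WFD can be sketched as a breadth-first search over the occupancy grid from the robot’s cell: it expands through free space and collects free cells that touch unknown space. This is a simplified sketch (the full WFD also groups frontier cells into frontier regions and chooses a navigation goal among them), and the cell encoding 0 = free, 1 = occupied, -1 = unknown is an assumption for illustration.&lt;br /&gt;

```python
# Sketch of wavefront frontier detection on an occupancy grid: a BFS from the
# robot cell over free space, collecting free cells adjacent to unknown space.
# Assumed grid encoding: 0 = free, 1 = occupied, -1 = unknown.
from collections import deque

def find_frontiers(grid, start):
    rows, cols = len(grid), len(grid[0])
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    seen, frontiers = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        neighbours = [(r + dr, c + dc) for dr, dc in steps
                      if (r + dr) in range(rows) and (c + dc) in range(cols)]
        # A frontier cell is a free cell adjacent to at least one unknown cell.
        if any(grid[nr][nc] == -1 for nr, nc in neighbours):
            frontiers.append((r, c))
        # The wavefront only expands through free space.
        for nr, nc in neighbours:
            if grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return frontiers

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0, -1]]
print(find_frontiers(grid, (0, 0)))  # the free cells bordering the unknown column
```
&lt;br /&gt;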
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the map-merging node subscribes to the maps that each robot publishes as topics to the ROS master node. After a map is received from each robot, it is processed by the map-merging topic, and the final merged map is published to all robots in preparation for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive a map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them using the image-matching algorithm proposed in [BPTX], and stitch the maps with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
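Assuming feature matching (ORB in the pipeline above) has already estimated the offset of one map relative to the other, the final overlay step can be sketched as below. This is a translation-only illustration with a non-negative offset, not the OpenCV stitching actually used in the project.&lt;br /&gt;

```python
# Sketch of the final stitching step of map merging: once feature matching has
# estimated where map B sits inside map A's frame, the occupancy grids are
# overlaid. Translation-only, non-negative offset assumed, for clarity.
# Assumed grid encoding: -1 = unknown, 0 = free, 1 = occupied.

def merge_maps(map_a, map_b, offset):
    dr, dc = offset  # estimated row/column shift of map B within map A's frame
    rows = max(len(map_a), dr + len(map_b))
    cols = max(len(map_a[0]), dc + len(map_b[0]))
    merged = [[-1] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):
        for c, cell in enumerate(row):
            merged[r][c] = cell
    for r, row in enumerate(map_b):
        for c, cell in enumerate(row):
            # Known cells (free or occupied) override unknown ones.
            if cell != -1:
                merged[r + dr][c + dc] = cell
    return merged

a = [[0, 0], [0, 1]]
b = [[1, 0], [0, 0]]
print(merge_maps(a, b, (1, 1)))
```
&lt;br /&gt;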
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation testing in this project used Gazebo as a virtual environment, constructed to match the physical environment arrangement, while Rviz, a visualisation tool, was used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, by contrast, the robots’ coordinates need to be pre-set. Further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, these visualisation tools are still used for testing and debugging, with the exception of Gazebo: since Gazebo’s main function is to simulate the environment, it is no longer needed once the robots can be tested in the real world. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots’ initial poses have been set in the physical environment, the final merged map generated by map-merging is produced correctly. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side; the left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in Rviz. Unfortunately, the sample size for cooperative mapping is not large enough for the measurements to follow any distribution, but the reduction in exploration time can still be identified from the table shown: on average, exploration time is reduced by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robot and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to map the surrounding environment from odometry and LiDAR input, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores its defined area to reduce exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average reduction in exploration time is 74.6%. As each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops when its exploration process is done, but map-merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample size should be collected. In addition, a beacon should be used instead of manually setting each robot’s initial pose in the map-merging topic, which would also simplify the exploration topic.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are the completed prototype with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms. IEEE Robotics &amp;amp; Automation Magazine, 13(2), pp. 99-110.&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. 2011. Localization and Mapping with Autonomous Robots. Radboud University Nijmegen.&lt;br /&gt;
&lt;br /&gt;
[12] Topiwala, A. et al. 2018. Frontier Based Exploration for Autonomous Robot. Cornell University Library. Available: http://search.proquest.com/docview/2073376204/ [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[13] Verbiest, K. et al. 2015. Autonomous Frontier Based Exploration for Mobile Robots. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov 2019].&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14997</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14997"/>
		<updated>2020-06-09T01:21:15Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment available. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [1]. ROS has one of the largest developer communities in the world, so developers can contribute findings and share solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted for industrial use, performing cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before running any program; otherwise nodes cannot establish connections or exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: each node is one program designed for one functionality. Nodes exchange information over topics, after registering with the master, using a TCP/IP-based protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what the nodes communicate about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives them [1].&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. It is an efficient and economical choice: low-cost, with the necessary functionalities already embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment, compared with single-robot mapping. The process is separated into several stages. Each robot takes odometry data, collected from its IMU and wheel encoder, together with scan data from its laser scan sensor, and runs SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and processed with image-processing techniques using OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: two processes carried out together. A robot runs SLAM to build a map of its environment so that it can navigate or explore. The localisation part means that while the robot is building its map from landmarks, it simultaneously localises itself against those same landmarks [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information) [11].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots automatically explore the unknown surrounding environment, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the exploration process is separated into a SLAM part and a navigation part, using gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier details are discussed in the following content. Once the destination is set, the robot navigates there using the default DWA path planner, which is divided into a global path planner and a local path planner: the global planner targets the destination of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores only a limited area instead of the whole unknown environment [12][13].&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the map-merging node subscribes to the maps that each robot publishes as topics to the ROS master node. After a map is received from each robot, it is processed by the map-merging topic, and the final merged map is published to all robots in preparation for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive a map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them using the image-matching algorithm proposed in [BPTX], and stitch the maps with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation testing in this project used Gazebo as a virtual environment, constructed to match the physical environment arrangement, while Rviz, a visualisation tool, was used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, by contrast, the robots’ coordinates need to be pre-set. Further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, these visualisation tools are still used for testing and debugging, with the exception of Gazebo: since Gazebo’s main function is to simulate the environment, it is no longer needed once the robots can be tested in the real world. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots’ initial poses have been set in the physical environment, the final merged map generated by map-merging is produced correctly. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side; the left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in Rviz. Unfortunately, the sample size for cooperative mapping is not large enough for the measurements to follow any distribution, but the reduction in exploration time can still be identified from the table shown: on average, exploration time is reduced by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robot and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to map the surrounding environment from odometry and LiDAR input, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores its defined area to reduce exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average reduction in exploration time is 74.6%. As each robot publishes its own map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops when its exploration process is done, but map-merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample size should be collected. In addition, a beacon should be used instead of manually setting each robot’s initial pose in the map-merging topic, which would also simplify the exploration topic.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are the completed prototype with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms. IEEE Robotics &amp;amp; Automation Magazine, 13(2), pp. 99-110.&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. 2011. Localization and Mapping with Autonomous Robots. Radboud University Nijmegen.&lt;br /&gt;
&lt;br /&gt;
[12] Topiwala, A. et al. 2018. Frontier Based Exploration for Autonomous Robot. Cornell University Library. Available: http://search.proquest.com/docview/2073376204/ [Accessed Nov 2019].&lt;br /&gt;
&lt;br /&gt;
[13] Verbiest, K. et al. 2015. Autonomous Frontier Based Exploration for Mobile Robots. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Available: https://www.researchgate.net/publication/300559124_Autonomous_Frontier_Based_Exploration_for_Mobile_Robots [Accessed Nov 2019].&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14993</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14993"/>
		<updated>2020-06-09T01:17:26Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that the victim has all the equipment needed. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system but also provides development environments for building robot application programs [1]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions worldwide rather than keeping them within a single university or company [1]. ROS can even be adapted for industrial cooperative tasks while keeping the source closed. ROS also provides good debugging tools such as rqt, a visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for all node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish the connections needed to exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS; one node is a program designed for one functionality. Nodes exchange information over topics by communicating through the master node, using the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message flows unidirectionally from one node to another through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information that nodes exchange, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives those messages [1].&lt;br /&gt;
&lt;br /&gt;
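The publish/subscribe pattern described above can be sketched as a small, self-contained model. This is deliberately not the real ROS API (a real node would use a client library such as rospy); the Master class and method names below are our own, purely to illustrate how messages flow one way from a publisher to every subscriber via a name registry.&lt;br /&gt;

```python
# Simplified model of ROS-style topic communication (not the real ROS API).
# The master keeps a registry of topics; a publisher pushes a message onto a
# topic and every subscriber callback registered for that topic receives it.

class Master:
    """Plays the role of the ROS master: a name server for topics."""
    def __init__(self):
        self.subscribers = {}   # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: from the publisher to each subscriber.
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)            # a node subscribing to laser scans
master.publish("/scan", {"ranges": [1.0, 2.5, 0.7]})  # a node publishing a scan message
```

In real ROS the master only resolves names and the nodes then talk to each other directly over TCP/IP; the model above collapses both roles into one object for brevity.&lt;br /&gt;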
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, being low-cost while embedding the necessary functionalities [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time taken to obtain the environment map compared with single-robot mapping, and it is separated into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, feed the SLAM and exploration processes, which generate an individual map that is published to the ROS master node. These maps are then subscribed to by the map-merging node and processed with image-processing techniques using OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each process is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping; it combines two processes, localisation and mapping. The SLAM process lets a robot build a map that the robot itself then uses to navigate or explore. In the localisation part, while the robot builds its map from landmarks it simultaneously localises itself from those same landmarks [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in being based on a Rao-Blackwellized particle filter (RBPF), where the particle, meaning the pose (position and orientation) of the robot, is the main variable in the process [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information) [11].&lt;br /&gt;
&lt;br /&gt;
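The particle idea behind the RBPF can be sketched in a few lines: each particle is a candidate pose, propagated by the control (odometry), weighted by how well the observation (the scan) fits, and then resampled. The noise values and the dummy scoring function below are illustrative only; this is not the Gmapping implementation.&lt;br /&gt;

```python
import random

# Illustrative RBPF step (not the Gmapping implementation): particles are
# candidate poses (x, y, theta); odometry moves them, the scan weights them.

def rbpf_step(particles, odom, scan_score):
    moved = []
    for (x, y, theta) in particles:
        # Propagate each particle by the control input plus motion noise.
        moved.append((x + odom[0] + random.gauss(0, 0.01),
                      y + odom[1] + random.gauss(0, 0.01),
                      theta + odom[2]))
    # Weight each particle by how well the observation fits its map.
    weights = [scan_score(p) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: keep poses in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [(0.0, 0.0, 0.0)] * 100
particles = rbpf_step(particles, odom=(0.1, 0.0, 0.0),
                      scan_score=lambda p: 1.0)  # dummy uniform score
```

In Gmapping the scoring function compares the LiDAR scan against the occupancy grid carried by each particle, which is where the "pose determined by observation and control" property comes from.&lt;br /&gt;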
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human manually driving the SLAM. Exploration can be implemented in both single-robot and cooperative mapping. The frontier-based approach is used to detect frontiers as the basic principle of exploration. The exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets navigation destinations among the frontier markers; further frontier information is discussed below. Once a destination has been set, the robot navigates to it using the default DWA planner, which divides into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step, not the real final destination, while the local path planner acts as an assistant that helps the robot avoid obstacles while following the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[] [12][13]&lt;br /&gt;
&lt;br /&gt;
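The frontier notion can be illustrated on a toy occupancy grid: a frontier cell is a known-free cell adjacent to at least one unknown cell, and these cells become candidate exploration goals. The grid encoding below (0 = free, 1 = occupied, -1 = unknown) is our own choice for the sketch, not the project’s WFD code.&lt;br /&gt;

```python
# Frontier detection sketch: 0 = free, 1 = occupied, -1 = unknown.
# A frontier cell is a known-free cell bordering unknown space; the
# Wavefront Frontier Detector groups such cells and picks goals among them.

def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 > nr or nr >= rows or 0 > nc or nc >= cols:
                    continue  # neighbour is outside the grid
                if grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0,  0]]
frontier_cells = find_frontiers(grid)  # cells (0, 1) and (2, 2) border unknown space
```

WFD proper then groups adjacent frontier cells into frontiers with a breadth-first search and navigates to a representative of a chosen group; the sketch stops at detection.&lt;br /&gt;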
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master node. After the maps are received, they are processed in the map-merging node, and the final merged map is published back to all robots for the following tasks. The map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: input the map from each robot over the ad-hoc network, apply Oriented FAST and Rotated BRIEF (ORB) feature detection, apply the image-matching algorithm proposed by [BPTX] and the general image-stitching algorithm in OpenCV, then output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
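Once the relative transform between two maps is known, the overlay step is straightforward. In the project that transform is estimated from ORB feature matches in OpenCV; the sketch below sidesteps the estimation and assumes a known pure-translation offset, so it shows only the overlay policy (known cells overwrite unknown ones, and occupied wins over free). The function and encoding are our own illustration, not the project’s merging code.&lt;br /&gt;

```python
# Merging two occupancy grids once their relative transform is known.
# -1 = unknown, 0 = free, 1 = occupied. Map B is shifted by (dr, dc)
# relative to map A; both are pasted into a common merged frame.

def merge_maps(map_a, map_b, offset, size):
    dr, dc = offset
    rows, cols = size
    merged = [[-1] * cols for _ in range(rows)]
    for r in range(len(map_a)):
        for c in range(len(map_a[0])):
            merged[r][c] = map_a[r][c]
    for r in range(len(map_b)):
        for c in range(len(map_b[0])):
            cell = map_b[r][c]
            if cell != -1:
                # Known cells from B fill unknown cells in the merge;
                # occupied beats free where both maps have information.
                current = merged[r + dr][c + dc]
                merged[r + dr][c + dc] = max(current, cell) if current != -1 else cell
    return merged

a = [[0, 0], [0, 1]]
b = [[1, 0], [0, 0]]
m = merge_maps(a, b, offset=(0, 1), size=(2, 3))
```

The hard part in practice is exactly what this sketch assumes away: recovering (dr, dc) and the rotation, which is why the project leans on ORB features and OpenCV stitching.&lt;br /&gt;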
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project use Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is a visualisation tool used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world the robots’ coordinates need to be pre-set. Further information on the importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, except for Gazebo, whose main function is to simulate the environment: once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, initial poses can be set before the robots run in Gazebo: when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots have been given their initial poses in the physical environment, map-merging generates the final merged map of both robots. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side []... The left-hand side shows the simulation environment in Gazebo and the corresponding visible map in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: on average, the cooperative case reduces exploration time by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robots and the remote PC by ad-hoc network communication. Each robot starts with its SLAM process to build a map of the surrounding environment from the input odometry and LiDAR information, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce the exploration time; based on test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the time reduction rate is 74.6% on average. After each robot publishes its own map to the remote PC, the map-merging node merges the maps immediately. When each robot’s exploration process is done the robots stop, but the map-merging needs to be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to keep the overlapping area as small as possible, and a larger sample of test results needs to be collected. In addition, a beacon should be applied instead of manually setting the initial pose of each robot in the map-merging node, which would also simplify the exploration process.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan across two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms. IEEE Robotics &amp;amp; Automation Magazine.&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. 2011. Localization and Mapping with Autonomous Robots. Radboud University Nijmegen.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14992</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14992"/>
		<updated>2020-06-09T01:15:21Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that the victim has all the equipment needed. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system but also provides development environments for building robot application programs [1]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions worldwide rather than keeping them within a single university or company [1]. ROS can even be adapted for industrial cooperative tasks while keeping the source closed. ROS also provides good debugging tools such as rqt, a visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for all node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish the connections needed to exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS; one node is a program designed for one functionality. Nodes exchange information over topics by communicating through the master node, using the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message flows unidirectionally from one node to another through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information that nodes exchange, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives those messages [1].&lt;br /&gt;
&lt;br /&gt;
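The publish/subscribe pattern described above can be sketched as a small, self-contained model. This is deliberately not the real ROS API (a real node would use a client library such as rospy); the Master class and method names below are our own, purely to illustrate how messages flow one way from a publisher to every subscriber via a name registry.&lt;br /&gt;

```python
# Simplified model of ROS-style topic communication (not the real ROS API).
# The master keeps a registry of topics; a publisher pushes a message onto a
# topic and every subscriber callback registered for that topic receives it.

class Master:
    """Plays the role of the ROS master: a name server for topics."""
    def __init__(self):
        self.subscribers = {}   # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: from the publisher to each subscriber.
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)            # a node subscribing to laser scans
master.publish("/scan", {"ranges": [1.0, 2.5, 0.7]})  # a node publishing a scan message
```

In real ROS the master only resolves names and the nodes then talk to each other directly over TCP/IP; the model above collapses both roles into one object for brevity.&lt;br /&gt;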
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, being low-cost while embedding the necessary functionalities [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time taken to obtain the environment map compared with single-robot mapping, and it is separated into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, feed the SLAM and exploration processes, which generate an individual map that is published to the ROS master node. These maps are then subscribed to by the map-merging node and processed with image-processing techniques using OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each process is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping; it combines two processes, localisation and mapping. The SLAM process lets a robot build a map that the robot itself then uses to navigate or explore. In the localisation part, while the robot builds its map from landmarks it simultaneously localises itself from those same landmarks [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in being based on a Rao-Blackwellized particle filter (RBPF), where the particle, meaning the pose (position and orientation) of the robot, is the main variable in the process [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information) [3].&lt;br /&gt;
&lt;br /&gt;
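The particle idea behind the RBPF can be sketched in a few lines: each particle is a candidate pose, propagated by the control (odometry), weighted by how well the observation (the scan) fits, and then resampled. The noise values and the dummy scoring function below are illustrative only; this is not the Gmapping implementation.&lt;br /&gt;

```python
import random

# Illustrative RBPF step (not the Gmapping implementation): particles are
# candidate poses (x, y, theta); odometry moves them, the scan weights them.

def rbpf_step(particles, odom, scan_score):
    moved = []
    for (x, y, theta) in particles:
        # Propagate each particle by the control input plus motion noise.
        moved.append((x + odom[0] + random.gauss(0, 0.01),
                      y + odom[1] + random.gauss(0, 0.01),
                      theta + odom[2]))
    # Weight each particle by how well the observation fits its map.
    weights = [scan_score(p) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: keep poses in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [(0.0, 0.0, 0.0)] * 100
particles = rbpf_step(particles, odom=(0.1, 0.0, 0.0),
                      scan_score=lambda p: 1.0)  # dummy uniform score
```

In Gmapping the scoring function compares the LiDAR scan against the occupancy grid carried by each particle, which is where the "pose determined by observation and control" property comes from.&lt;br /&gt;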
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human manually driving the SLAM. Exploration can be implemented in both single-robot and cooperative mapping. The frontier-based approach is used to detect frontiers as the basic principle of exploration. The exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets navigation destinations among the frontier markers; further frontier information is discussed below. Once a destination has been set, the robot navigates to it using the default DWA planner, which divides into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step, not the real final destination, while the local path planner acts as an assistant that helps the robot avoid obstacles while following the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
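The frontier notion can be illustrated on a toy occupancy grid: a frontier cell is a known-free cell adjacent to at least one unknown cell, and these cells become candidate exploration goals. The grid encoding below (0 = free, 1 = occupied, -1 = unknown) is our own choice for the sketch, not the project’s WFD code.&lt;br /&gt;

```python
# Frontier detection sketch: 0 = free, 1 = occupied, -1 = unknown.
# A frontier cell is a known-free cell bordering unknown space; the
# Wavefront Frontier Detector groups such cells and picks goals among them.

def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 > nr or nr >= rows or 0 > nc or nc >= cols:
                    continue  # neighbour is outside the grid
                if grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [[0, 0, -1],
        [0, 1, -1],
        [0, 0,  0]]
frontier_cells = find_frontiers(grid)  # cells (0, 1) and (2, 2) border unknown space
```

WFD proper then groups adjacent frontier cells into frontiers with a breadth-first search and navigates to a representative of a chosen group; the sketch stops at detection.&lt;br /&gt;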
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps that each robot publishes as topics to the ROS master node. After the maps are received, they are processed in the map-merging node, and the final merged map is published back to all robots for the following tasks. The map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: input the map from each robot over the ad-hoc network, apply Oriented FAST and Rotated BRIEF (ORB) feature detection, apply the image-matching algorithm proposed by [BPTX] and the general image-stitching algorithm in OpenCV, then output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
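Once the relative transform between two maps is known, the overlay step is straightforward. In the project that transform is estimated from ORB feature matches in OpenCV; the sketch below sidesteps the estimation and assumes a known pure-translation offset, so it shows only the overlay policy (known cells overwrite unknown ones, and occupied wins over free). The function and encoding are our own illustration, not the project’s merging code.&lt;br /&gt;

```python
# Merging two occupancy grids once their relative transform is known.
# -1 = unknown, 0 = free, 1 = occupied. Map B is shifted by (dr, dc)
# relative to map A; both are pasted into a common merged frame.

def merge_maps(map_a, map_b, offset, size):
    dr, dc = offset
    rows, cols = size
    merged = [[-1] * cols for _ in range(rows)]
    for r in range(len(map_a)):
        for c in range(len(map_a[0])):
            merged[r][c] = map_a[r][c]
    for r in range(len(map_b)):
        for c in range(len(map_b[0])):
            cell = map_b[r][c]
            if cell != -1:
                # Known cells from B fill unknown cells in the merge;
                # occupied beats free where both maps have information.
                current = merged[r + dr][c + dc]
                merged[r + dr][c + dc] = max(current, cell) if current != -1 else cell
    return merged

a = [[0, 0], [0, 1]]
b = [[1, 0], [0, 0]]
m = merge_maps(a, b, offset=(0, 1), size=(2, 3))
```

The hard part in practice is exactly what this sketch assumes away: recovering (dr, dc) and the rotation, which is why the project leans on ORB features and OpenCV stitching.&lt;br /&gt;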
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project use Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is a visualisation tool used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world the robots’ coordinates need to be pre-set. Further information on the importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, except for Gazebo, whose main function is to simulate the environment: once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, initial poses can be set before the robots run in Gazebo: when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots have been given their initial poses in the physical environment, map-merging generates the final merged map of both robots. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side []... The left-hand side shows the simulation environment in Gazebo and the corresponding visible map in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: on average, the cooperative case reduces exploration time by 74.6% compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes running on each robot, linked to each other and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the input odometry and LiDAR information, and then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce the exploration time; based on test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average reduction is 74.6%. After each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. Once each robot’s exploration process has finished, the robots stop, but the map-merging node must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample of test results should be collected. In addition, a beacon should be used instead of manually setting the initial pose of each robot for the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present a project plan spanning two semesters. The first stage is to become familiar with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping works on building maps of the environment, while I am responsible for localisation and navigation. After each unit is completed, we perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation, Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics, Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”. IEEE Robotics &amp;amp; Automation Magazine.&lt;br /&gt;
&lt;br /&gt;
[11] Mader, B. 2011. Localization and Mapping with Autonomous Robots, Radboud University Nijmegen.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14991</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14991"/>
		<updated>2020-06-09T01:14:16Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which robots are tasked with exploring a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all robots are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [1]. ROS has one of the largest communities in the world, so developers and other talented people can easily share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before any program is run, since nodes otherwise have no way to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics by registering with the master node and communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information that a node handles, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives them [1].&lt;br /&gt;
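The publish/subscribe pattern described above can be sketched in a few lines of plain Python. This is a toy illustration of the pattern only; the Master class, topic name and message contents here are invented for the example, and the real rospy/ROS API differs:&lt;br /&gt;

```python
# Toy illustration of the ROS publish/subscribe pattern (NOT the rospy API):
# nodes register callbacks against named topics, and publishing a message
# delivers it one-directionally to every subscriber of that topic.
from collections import defaultdict

class Master:
    """Minimal name server: maps topic names to subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Unidirectional flow: publisher -> every subscriber of the topic.
        for callback in self.subscribers[topic]:
            callback(message)

master = Master()
received = []
master.subscribe("/tb3_0/map", received.append)          # a merging node listens
master.publish("/tb3_0/map", {"width": 4, "height": 4})  # a SLAM node publishes
```

In real ROS the master only performs name resolution and the nodes then exchange messages peer to peer, but the topic/publisher/subscriber roles are the same.&lt;br /&gt;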
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. It is an efficient and economical choice: low-cost, with the necessary functionality embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain the environment map compared with single-robot mapping, and it is divided into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration processes to generate an individual detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and combined using image-processing techniques with the help of OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in Figure , and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping; it combines two processes, localization and mapping. The SLAM process builds the map that the robot itself uses to navigate or explore. The localization part means that, while the robot builds its map from landmarks, it also localizes itself against those landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on the Rao-Blackwellized particle filter (RBPF) and differs from the general SLAM approach: the particle, meaning here the pose (position and orientation) of the robot, is the main variable of the process [2]. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined from the observation (LiDAR information) and the control (odometry information) [3].&lt;br /&gt;
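The RBPF idea above can be illustrated with one minimal particle filter step over a 2-D position: propagate particles with the odometry control, weight them by agreement with a range observation, then resample. This is a hand-rolled sketch, not the Gmapping implementation; the landmark position, noise values and observation are invented for the example:&lt;br /&gt;

```python
# Minimal particle filter step: motion update from the control (odometry),
# measurement update from the observation (a range to a known landmark,
# standing in for LiDAR), then resampling proportional to the weights.
import math
import random

random.seed(0)  # deterministic for the example

def pf_step(particles, control, observed_range,
            landmark=(5.0, 0.0), motion_noise=0.1, sensor_sigma=0.5):
    dx, dy = control
    # Motion update: apply the odometry increment plus small Gaussian noise.
    moved = [(x + dx + random.gauss(0, motion_noise),
              y + dy + random.gauss(0, motion_noise)) for x, y in particles]
    # Measurement update: weight by how well each particle's predicted
    # range to the landmark matches the observed range.
    weights = []
    for x, y in moved:
        predicted = math.hypot(landmark[0] - x, landmark[1] - y)
        weights.append(math.exp(-(predicted - observed_range) ** 2
                                / (2 * sensor_sigma ** 2)))
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: particles with higher weights survive more often.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [(0.0, 0.0)] * 100          # all particles start at the origin
particles = pf_step(particles, control=(1.0, 0.0), observed_range=4.0)
mean_x = sum(x for x, _ in particles) / len(particles)
```

After the step the particle cloud concentrates near x = 1, where the odometry and the range observation agree; Gmapping additionally attaches a map estimate to each particle, which this sketch omits.&lt;br /&gt;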
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot and cooperative mapping. The frontier-based approach is used to detect frontiers, which is the basic principle of exploration. The exploration process can be separated into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map from which the exploration approach finds the frontiers and then sets the navigation destination among the frontier markers; further frontier details are discussed in the following content. Once the destination is set, the robot navigates to it using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step rather than the true final destination, while the local path planner can be regarded as an assistant that helps the robot avoid obstacles along the path to the destination. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
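The frontier test at the heart of frontier-based exploration can be written directly against an occupancy grid. This simplified sketch uses the usual ROS occupancy values (0 = free, 100 = occupied, -1 = unknown); it only detects individual frontier cells and does not group them into the wavefront regions that the full WFD algorithm produces:&lt;br /&gt;

```python
# Frontier detection on an occupancy grid: a frontier cell is a FREE cell
# with at least one UNKNOWN 4-neighbour. Such cells sit on the boundary
# between mapped and unmapped space and become candidate navigation goals.
FREE, OCCUPIED, UNKNOWN = 0, 100, -1

def find_frontiers(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            # Check the four neighbours for unknown space.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [
    [0,   0, -1],
    [0, 100, -1],
    [0,   0,  0],
]
frontier_cells = find_frontiers(grid)  # free cells bordering unknown space
```

Driving the robot toward such cells (and re-running the detector as the map grows) is what lets exploration proceed without human teleoperation.&lt;br /&gt;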
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps from each robot, which are published as topics to the ROS master node. After each robot’s map is subscribed to, it is processed in the map-merging topic, and the final merged map is published to all robots in preparation for the following tasks. This map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: input the maps from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), apply the image matching algorithm proposed by [BPTX] and the general image stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
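The final compositing step of the merge can be sketched as follows, assuming the relative initial pose of the two robots is already known and is a pure translation. The real pipeline estimates this transform with ORB feature matching and OpenCV stitching; this toy version only shows how two occupancy grids are overlaid once the transform is known, with invented example grids:&lt;br /&gt;

```python
# Overlay two occupancy grids given a known relative offset (dr, dc) of
# map_b's origin in map_a's frame. A known cell (free or occupied) always
# wins over an unknown one; on conflicting known cells map_b wins here.
# Offsets are assumed non-negative in this sketch.
UNKNOWN = -1

def merge_grids(map_a, map_b, offset):
    dr, dc = offset
    rows = max(len(map_a), len(map_b) + dr)
    cols = max(len(map_a[0]), len(map_b[0]) + dc)
    merged = [[UNKNOWN] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):          # copy map_a into the frame
        for c, v in enumerate(row):
            merged[r][c] = v
    for r, row in enumerate(map_b):          # overlay map_b, shifted
        for c, v in enumerate(row):
            if v != UNKNOWN:                 # never overwrite with unknown
                merged[r + dr][c + dc] = v
    return merged

# Robot B started one column to the right of robot A.
map_a = [[0, 0, -1]]
map_b = [[100, 0, 0]]
merged = merge_grids(map_a, map_b, offset=(0, 1))
```

A cell that one robot left unknown is filled in by the other robot’s observation, which is exactly why a wrong initial pose estimate distorts the merged map.&lt;br /&gt;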
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project used Gazebo as a virtual environment, constructed to match the physical environment arrangement, while Rviz is the visualisation tool used to monitor the robots’ behaviour. The figures show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world the robots’ poses must be pre-set, and the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the same visualisation tools are still used for testing and debugging, with the exception of Gazebo: since its main function is to simulate the environment, it is no longer needed once the robots can be tested in the real world. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame. &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, the initial pose of each robot can be set before it is spawned in the Gazebo environment. When two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial poses cannot be set in this way because the robots do not share a common reference frame. A beacon could solve this problem; without one, the initial pose of each robot is difficult to set and verify. The test for a zero-degree yaw angle was completed and worked well, but other angles have not yet been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging topic provides a common fixed reference frame for both robots, which can be used in place of the Gazebo environment. Once the two robots have had their initial poses set in the physical environment, the map-merging node produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []... The left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in Rviz. Unfortunately, the sample size of the cooperative mapping tests is not large enough for the results to follow any distribution, but the reduction in exploration time can still be identified from the table shown: a 74.6% average reduction in exploration time for the cooperative case compared with the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes running on each robot, linked to each other and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the input odometry and LiDAR information, and then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce the exploration time; based on test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average reduction is 74.6%. After each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. Once each robot’s exploration process has finished, the robots stop, but the map-merging node must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample of test results should be collected. In addition, a beacon should be used instead of manually setting the initial pose of each robot for the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present a project plan spanning two semesters. The first stage is to become familiar with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping works on building maps of the environment, while I am responsible for localisation and navigation. After each unit is completed, we perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation, Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics, Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”. IEEE Robotics &amp;amp; Automation Magazine.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14990</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14990"/>
		<updated>2020-06-09T01:13:11Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Turtlebot3 Burger */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which robots are tasked with exploring a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all robots are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [1]. ROS has one of the largest communities in the world, so developers and other talented people can easily share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before any program is run, since nodes otherwise have no way to establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics by registering with the master node and communicating over the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information that a node handles, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives them [1].&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. It is an efficient and economical choice: low-cost, with the necessary functionality embedded [1].&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain the environment map compared with single-robot mapping, and it is divided into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration processes to generate an individual detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging topic and combined using image-processing techniques with the help of OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in Figure , and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping; it combines two processes, localization and mapping. The SLAM process builds the map that the robot itself uses to navigate or explore. The localization part means that, while the robot builds its map from landmarks, it also localizes itself against those landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on the Rao-Blackwellized particle filter (RBPF) and differs from the general SLAM approach: the particle, meaning here the pose (position and orientation) of the robot, is the main variable of the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined from the observation (LiDAR information) and the control (odometry information). &lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot and cooperative mapping. The frontier-based approach is used to detect frontiers, which is the basic principle of exploration. The exploration process can be separated into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map from which the exploration approach finds the frontiers and then sets the navigation destination among the frontier markers; further frontier details are discussed in the following content. Once the destination is set, the robot navigates to it using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step rather than the true final destination, while the local path planner can be regarded as an assistant that helps the robot avoid obstacles along the path to the destination. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the maps from each robot, which are published as topics to the ROS master node. After each robot’s map is subscribed to, it is processed in the map-merging topic, and the final merged map is published to all robots in preparation for the following tasks. This map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: input the maps from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), apply the image matching algorithm proposed by [BPTX] and the general image stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as the virtual test environment, constructed to match the physical environment arrangement, while Rviz is the visualisation tool used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, by contrast, the robots’ coordinates must be pre-set. The importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not: its only role is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, the initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle was completed and worked well, but other angles were not.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot matters in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo frame. Once the two robots’ initial poses are set in the physical environment, map merging generates the final merged map. If the initial pose of a robot is not set correctly, the merged map is distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []..., with the Gazebo simulation environment and the corresponding Rviz map on the left-hand side. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: on average, cooperative exploration takes 74.6% less time than the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes running inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the odometry and LiDAR inputs, and then passes the generated map to the exploration and map-merging processes. In the exploration process each robot explores its defined area to reduce the exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average time reduction is 74.6%. As each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops once its exploration is done, while map merging must be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to keep the overlapping area as small as possible, and a larger sample of test runs needs to be collected. A beacon should also be applied instead of setting the initial pose of each robot for the map-merging topic, which would make the exploration topic easier as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches to our project, and design the base environment on which the whole project will be tested. The second stage is navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14989</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14989"/>
		<updated>2020-06-09T01:12:52Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Robot Operating System (ROS) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software that, despite its name, is not only an operating system but also provides development environments for building the application programs of robots [1]. ROS has built one of the largest communities in the world, so developers and other talented people can contribute and share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [1]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for all node connections and message communication. It must be started before any program runs; otherwise the nodes cannot establish connections and exchange messages [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, communicating via the master node using the TCP/IP protocol [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message travels only unidirectionally from one node to another, through publishing and subscribing [1].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what each node communicates about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives those messages [1].&lt;br /&gt;
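The publish/subscribe mechanism described above can be illustrated without ROS itself. The sketch below is a plain-Python stand-in for the master’s role of routing messages on named topics; the class and topic names are made up for illustration, and this is not the rospy/roscpp API.&lt;br /&gt;

```python
from collections import defaultdict

class MiniMaster:
    """Toy stand-in for the master's topic routing (illustration only)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Unidirectional: the publisher pushes, and every callback
        # subscribed to the topic receives the message.
        for callback in self.subscribers[topic]:
            callback(message)

master = MiniMaster()
received = []
master.subscribe("/robot1/map", received.append)          # e.g. a map-merging node
master.publish("/robot1/map", {"width": 4, "height": 3})  # e.g. a SLAM node
print(received)
```

In ROS the routing is negotiated through the master and carried over TCP/IP between nodes, but the publisher/subscriber relationship is the same.&lt;br /&gt;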
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware part of the robot, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. The TurtleBot3 is the most efficient and economical selection, being low-cost with the necessary functionalities embedded.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map merging for the multi-robot system aims to reduce the time taken to obtain the environment map compared with single-robot mapping. It is separated into several processes: odometry data collected from each robot’s IMU and encoder, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration processes to generate an individual map that each robot publishes to the ROS master node. Those maps are then subscribed by the map-merging topic and processed with image-processing techniques with OpenCV’s help. Finally, the merged map is published to my partner’s part of the project for multi-robot navigation. The overall flow chart is shown in Figure , and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: it combines two processes, localization and mapping. Basically, the SLAM process builds the map that the robot itself uses to navigate or explore. The localization part of SLAM means that while the robot is building its map from landmarks, it also localizes itself against those landmarks at the same time [10]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF), in which the particle, meaning here the pose (position and orientation) of the robot, is the main variable of the process. In a general SLAM algorithm the pose relies on the map and vice versa, but in gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
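The particle-filter idea behind gmapping can be illustrated in one dimension. The sketch below shows a generic predict/weight/resample cycle under assumed Gaussian noise models, with the pose as the particle state; it is a textbook-style illustration, not the RBPF mapping algorithm that gmapping actually implements.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, control, observation, motion_noise=0.1, obs_noise=0.5):
    """One predict/weight/resample cycle of a 1-D particle filter.

    Each particle is a candidate pose. Predict with the control input
    (odometry), weight each particle by how well it explains the
    observation (here, a direct noisy pose measurement), then resample.
    Gaussian noise models are assumed purely for illustration.
    """
    # Predict: apply the control (odometry) plus motion noise.
    particles = particles + control + rng.normal(0, motion_noise, len(particles))
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    return particles[rng.choice(len(particles), size=len(particles), p=weights)]

true_pose, particles = 0.0, rng.normal(0.0, 1.0, 500)
for _ in range(5):                       # the robot drives forward 1 m per step
    true_pose += 1.0
    particles = pf_step(particles, control=1.0, observation=true_pose)
print(round(float(particles.mean()), 2))  # pose estimate, close to true_pose = 5.0
```

In the RBPF, each particle additionally carries its own map hypothesis, which is what lets gmapping condition the map on the sampled trajectory.&lt;br /&gt;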
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In this project, the exploration process maps the unknown surrounding environment automatically, rather than having a human drive the robots through a manual SLAM run. Exploration applies to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the exploration process is separated into a SLAM part and a navigation part, handled by gmapping and the DWA path planner respectively. In the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach locates the frontiers and then sets the navigation destination among the frontier markers; frontiers are discussed further in the following content. Once the destination is set, the robot navigates to it using the default DWA path planner, which is divided into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step, which is not necessarily the real final destination, while the local path planner acts as an assistant that helps the robot avoid any obstacle while it follows the path to the destination. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
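The local planner’s role in DWA can be sketched as sampling velocity commands, rolling each one out over a short horizon, and scoring the endpoints. The weights and the endpoint-only clearance check below are simplifications chosen for illustration; this is not the behaviour of the ROS dwa_local_planner.&lt;br /&gt;

```python
import math

def rollout(x, y, theta, v, w, dt=0.1, steps=10):
    """Forward-simulate a constant (v, w) command; return the end pose."""
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

def dwa_pick(pose, goal, obstacles, v_samples, w_samples, clearance_min=0.3):
    """Pick the (v, w) whose rollout ends closest to the goal while keeping
    a minimum clearance from every obstacle (illustrative scoring only)."""
    best, best_cost = None, float("inf")
    for v in v_samples:
        for w in w_samples:
            ex, ey, _ = rollout(*pose, v, w)
            clearance = min(math.hypot(ex - ox, ey - oy) for ox, oy in obstacles)
            if clearance < clearance_min:   # would hit or graze an obstacle
                continue
            cost = math.hypot(goal[0] - ex, goal[1] - ey)
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# Goal straight ahead, obstacle off to one side: driving straight stays clear.
cmd = dwa_pick(pose=(0.0, 0.0, 0.0), goal=(2.0, 0.0),
               obstacles=[(1.0, 0.3)],
               v_samples=[0.1, 0.3, 0.5], w_samples=[-0.5, 0.0, 0.5])
print(cmd)
```

The global planner would supply the goal of each navigation step; the loop above plays the local planner’s part of steering around nearby obstacles.&lt;br /&gt;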
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The remote PC subscribes to the map that each robot publishes as a topic to the ROS master node. Each subscribed map is processed by the map-merging topic, and the final merged map is published back to all robots for the following tasks. Map merging necessarily relies on single-robot exploration: the merging node searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them with the image-matching algorithm proposed by [BPTX], and stitch the maps with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as the virtual test environment, constructed to match the physical environment arrangement, while Rviz is the visualisation tool used to monitor the robots’ behaviours. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, by contrast, the robots’ coordinates must be pre-set. The importance of pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not: its only role is to simulate the environment, so once the robots can be tested in the real world it is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In simulation, the initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set directly because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle was completed and worked well, but other angles were not.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot matters in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo frame. Once the two robots’ initial poses are set in the physical environment, map merging generates the final merged map. If the initial pose of a robot is not set correctly, the merged map is distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []..., with the Gazebo simulation environment and the corresponding Rviz map on the left-hand side. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: on average, cooperative exploration takes 74.6% less time than the single-robot case.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes running inside each robot, linked to one another and to the remote PC by ad-hoc network communication. Each robot starts with its SLAM process, building a map of the surrounding environment from the odometry and LiDAR inputs, and then passes the generated map to the exploration and map-merging processes. In the exploration process each robot explores its defined area to reduce the exploration time; based on the test results with the modified Wavefront Frontier Detector (WFD) exploration algorithm, the average time reduction is 74.6%. As each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. Each robot stops once its exploration is done, while map merging must be stopped manually by the operator. In future work, the modified WFD algorithm needs to be optimised to keep the overlapping area as small as possible, and a larger sample of test runs needs to be collected. A beacon should also be applied instead of setting the initial pose of each robot for the map-merging topic, which would make the exploration topic easier as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches to our project, and design the base environment on which the whole project will be tested. The second stage is navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. “Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms”.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14988</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14988"/>
		<updated>2020-06-09T01:12:11Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software that, despite its name, is not only an operating system but also provides development environments for building the application programs of robots [2]. ROS has built one of the largest communities in the world, so developers and other talented people can contribute and share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for all node connections and message communication. It must be started before any program runs; otherwise the nodes cannot establish connections and exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, communicating via the master node using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message travels only unidirectionally from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what each node communicates about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives those messages [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware part of the robot, many kinds of robots could be used, but the TurtleBot3 is the most suitable for this project. The TurtleBot3 is the most efficient and economical selection, being low-cost with the necessary functionalities embedded.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map merging for the multi-robot system aims to reduce the time taken to obtain the environment map compared with single-robot mapping. It is separated into several processes: odometry data collected from each robot’s IMU and encoder, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration processes to generate an individual map that each robot publishes to the ROS master node. Those maps are then subscribed by the map-merging topic and processed with image-processing techniques with OpenCV’s help. Finally, the merged map is published to my partner’s part of the project for multi-robot navigation. The overall flow chart is shown in Figure , and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: two processes running together. The SLAM process builds a map that the robot itself uses to navigate or explore, and the localization part means that while the robot is building its map from landmarks, it is also localizing itself against those same landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping follows the Rao-Blackwellized particle filter (RBPF) approach, in which each particle represents a hypothesis of the pose (position and orientation) of the robot and is the main variable of the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping, the pose is instead determined from the observations (LiDAR information) and the controls (odometry information).&lt;br /&gt;
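As a loose illustration of the particle idea, the following toy one-dimensional filter moves each pose hypothesis by the control (odometry), weights it by how well it explains the observation (a range reading), and resamples. It is a sketch under invented assumptions (a single landmark, made-up noise values), not the Gmapping implementation.

```python
import random

# Toy particle-filter step for 1-D pose estimation: each particle is a
# pose hypothesis, moved by the control (odometry) and weighted by the
# observation (a LiDAR-like range to a landmark). Illustrative sketch
# only; landmark position and noise are invented for the example.

random.seed(0)

def particle_filter_step(particles, control, observation, landmark=10.0):
    # Predict: apply the odometry with some motion noise.
    moved = [p + control + random.gauss(0.0, 0.1) for p in particles]
    # Weight: particles whose predicted range matches the scan score higher.
    weights = [1.0 / (1e-6 + abs((landmark - p) - observation)) for p in moved]
    # Resample particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(moved))

particles = [random.uniform(0.0, 5.0) for _ in range(200)]
# The robot drives 1.0 forward, then measures range 6.0 to the landmark,
# so its true pose should be near 10.0 - 6.0 = 4.0.
particles = particle_filter_step(particles, control=1.0, observation=6.0)
estimate = sum(particles) / len(particles)
print(round(estimate, 1))
```

After one step the particle cloud concentrates near the pose that explains both the control and the observation, which is the behaviour the paragraph above describes.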
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of relying on SLAM driven manually by a human. Exploration can be applied both to single-robot mapping and to cooperative mapping. A frontier-based approach is used to detect frontiers, which is the basic principle of the exploration. The exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed below. Once the destination is set, the robot navigates to it using the default DWA path planner, which divides into a global path planner and a local path planner. The global path planner plans toward the destination of the current navigation step, which is not necessarily the real final destination, while the local path planner acts as an assistant that helps the robot avoid any obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
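The frontier idea at the heart of WFD can be shown concretely: a frontier cell is a known free cell adjacent to unknown space, found by searching outward from the robot through free space. The sketch below assumes a toy occupancy-grid encoding (0 free, 1 occupied, -1 unknown) and is illustrative only, not the project code.

```python
from collections import deque

# Toy frontier detection on an occupancy grid, the core idea behind the
# Wavefront Frontier Detector (WFD). Encoding assumed for this sketch:
# 0 = free, 1 = occupied, -1 = unknown.

def find_frontiers(grid, start):
    """Breadth-first search outward from the robot through free cells,
    collecting every free cell that borders unknown space."""
    rows, cols = len(grid), len(grid[0])
    cells = {(r, c) for r in range(rows) for c in range(cols)}
    seen, frontiers = {start}, []
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        # A free cell touching unknown space is a frontier cell.
        if any(n in cells and grid[n[0]][n[1]] == -1 for n in neighbours):
            frontiers.append((r, c))
        for n in neighbours:
            if n in cells and grid[n[0]][n[1]] == 0 and n not in seen:
                seen.add(n)
                queue.append(n)
    return frontiers

grid = [
    [0, 0, -1],
    [0, 1, -1],
    [0, 0,  0],
]
print(find_frontiers(grid, (0, 0)))  # the free cells bordering the unknown column
```

The detected frontier cells are then the candidate navigation destinations handed to the DWA planner; restricting the search to a bounded region of the grid is what the modified WFD in this project does.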
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The Remote PC subscribes to the maps from each robot, which are published as topics to the ROS master node. After each map is subscribed, it is processed in the map-merging topic, and the resulting merged map is published to all robots in preparation for the following tasks. This map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them using the image-matching algorithm proposed by [BPTX], and stitch them with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
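Once the two maps are aligned (the ORB matching and stitching steps handle alignment), compositing them reduces to a cell-wise rule. The sketch below is an assumption about that final step, using the common occupancy-grid encoding (-1 unknown, 0 free, 1 occupied) and letting occupied win over free where both maps have data; it is not the OpenCV pipeline itself.

```python
# After the two maps are aligned, the final compositing reduces to a
# cell-wise rule: keep known values over unknown, and let "occupied"
# dominate "free" where both maps have data. Assumed encoding for this
# sketch: -1 = unknown, 0 = free, 1 = occupied.

def merge_cell(a, b):
    if a == -1:
        return b
    if b == -1:
        return a
    return max(a, b)  # occupied (1) dominates free (0)

def merge_maps(map_a, map_b):
    """Composite two already-aligned occupancy grids of equal size."""
    return [[merge_cell(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]

map_a = [[-1, 0], [1, -1]]
map_b = [[0, -1], [0, 0]]
print(merge_maps(map_a, map_b))  # [[0, 0], [1, 0]]
```

Treating occupied as dominant is a conservative choice: a wall seen by either robot stays a wall in the merged map, which is the safer assumption for the navigation task that consumes it.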
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation testbed uses Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is a visualisation tool used to monitor the behaviour of the robots. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo as a simulation tool is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, the robot coordinates must be pre-set, and further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the same visualisation tools are still used for testing and debugging, with the exception of Gazebo, whose main function is to simulate the environment; once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, the initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the exploration pose of each robot is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and simulation environments, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots have their initial poses set in the physical environment, the map-merging process produces the final merged map of the robots. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []..., with the Gazebo simulation environment and the corresponding visible map in RViz on the left-hand side. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: the average exploration time was reduced by 74.6% between the single-robot and cooperative cases.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to one another and to the Remote PC by ad-hoc network communication. Each robot starts with its SLAM process to obtain a map of the surrounding environment from its odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce exploration time; based on the test results, the modified Wavefront Frontier Detector (WFD) exploration algorithm reduced the time by 74.6% on average. As each robot publishes its own map to the Remote PC, the map-merging topic merges the maps immediately. When each robot finishes its exploration process it stops, but the map merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to keep the overlapping area as small as possible, and a larger sample of results should be collected. In addition, a beacon should be used in place of manually setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches for the project, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will be working on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we would like to report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms. IEEE Robotics &amp;amp; Automation Magazine, 13(2), pp. 99-110.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14987</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14987"/>
		<updated>2020-06-09T01:11:51Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides different development environments for developing robot application programs [2]. ROS has built one of the largest communities in the world, so developers and other talented people can easily contribute their findings and solutions and share them worldwide, rather than problems remaining unsolved or unshared within the limits of a single university or company [2]. ROS can even be modified for industry to perform cooperative tasks with closed-source code. ROS also supports good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for all node connections and message communication. It must be started before running any program; otherwise, nodes cannot establish the connections needed to exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master node and communicating using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data exchanged between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what a node communicates about, much like a topic of conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for sending messages on the topic, and the subscriber is the node that receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many platforms could be used, but the TurtleBot3 Burger is the one best suited to this project: it is the most efficient and economical choice, combining low cost with the necessary built-in functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment compared with single-robot mapping, and it separates into several processes. Each robot takes odometry data from its IMU and wheel encoders, together with scan data from its laser scanner, runs SLAM and exploration, and publishes its individually detected map to the ROS master node. The map-merging topic then subscribes to those maps and processes them with the help of OpenCV image processing. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in Figure , and each stage is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: two processes running together. The SLAM process builds a map that the robot itself uses to navigate or explore, and the localization part means that while the robot is building its map from landmarks, it is also localizing itself against those same landmarks at the same time [10]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping follows the Rao-Blackwellized particle filter (RBPF) approach, in which each particle represents a hypothesis of the pose (position and orientation) of the robot and is the main variable of the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping, the pose is instead determined from the observations (LiDAR information) and the controls (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of relying on SLAM driven manually by a human. Exploration can be applied both to single-robot mapping and to cooperative mapping. A frontier-based approach is used to detect frontiers, which is the basic principle of the exploration. The exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed below. Once the destination is set, the robot navigates to it using the default DWA path planner, which divides into a global path planner and a local path planner. The global path planner plans toward the destination of the current navigation step, which is not necessarily the real final destination, while the local path planner acts as an assistant that helps the robot avoid any obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The Remote PC subscribes to the maps from each robot, which are published as topics to the ROS master node. After each map is subscribed, it is processed in the map-merging topic, and the resulting merged map is published to all robots in preparation for the following tasks. This map-merging process necessarily relies on single-robot exploration: it searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), match them using the image-matching algorithm proposed by [BPTX], and stitch them with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation testbed uses Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is a visualisation tool used to monitor the behaviour of the robots. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo as a simulation tool is that it provides a common fixed coordinate frame to all robots in the simulation environment; in the physical world, the robot coordinates must be pre-set, and further information on the importance of this pre-setting is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the same visualisation tools are still used for testing and debugging, with the exception of Gazebo, whose main function is to simulate the environment; once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation environment, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, the initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the exploration pose of each robot is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and simulation environments, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots have their initial poses set in the physical environment, the map-merging process produces the final merged map of the robots. If the initial pose of a robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side as []..., with the Gazebo simulation environment and the corresponding visible map in RViz on the left-hand side. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: the average exploration time was reduced by 74.6% between the single-robot and cooperative cases.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to one another and to the Remote PC by ad-hoc network communication. Each robot starts with its SLAM process to obtain a map of the surrounding environment from its odometry and LiDAR inputs, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce exploration time; based on the test results, the modified Wavefront Frontier Detector (WFD) exploration algorithm reduced the time by 74.6% on average. As each robot publishes its own map to the Remote PC, the map-merging topic merges the maps immediately. When each robot finishes its exploration process it stops, but the map merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to keep the overlapping area as small as possible, and a larger sample of results should be collected. In addition, a beacon should be used in place of manually setting the initial pose of each robot in the map-merging topic, which would simplify the exploration topic as well.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan for the two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches for the project, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will be working on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we would like to report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;br /&gt;
&lt;br /&gt;
[10] Durrant-Whyte, H. &amp;amp; Bailey, T. 2006. Simultaneous Localisation and Mapping (SLAM): Part I The Essential Algorithms. IEEE Robotics &amp;amp; Automation Magazine, 13(2), pp. 99-110.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14986</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14986"/>
		<updated>2020-06-09T01:10:52Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments and tools for building robot application programs [2]. ROS has one of the largest developer communities in robotics, so developers can contribute and share findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can also be adapted for industrial use, including cooperative tasks built on closed-source components. In addition, ROS provides good debugging tools such as rqt, a visualisation tool that helps reveal errors while a robot is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be running before any other program starts, because nodes rely on it to establish connections with each other [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master and then communicating with each other via the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the stream of messages a node sends or receives, much like a topic in a conversation. Each topic needs at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber is the node that receives them [2].&lt;br /&gt;
&lt;br /&gt;
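The master, node, message and topic concepts above can be illustrated with a small pure-Python sketch. This is a toy model of the publish/subscribe pattern only, not the real rospy API, and all names in it are made up for illustration:&lt;br /&gt;

```python
# Toy model of ROS-style topic communication (illustrative only, not rospy):
# a "master" keeps a registry of topic names, publishers push messages to a
# topic, and every callback subscribed to that topic receives them.

class Master:
    def __init__(self):
        self.topics = {}  # topic name -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # messages are unidirectional: from the publisher to every
        # subscriber registered on the same topic
        for callback in self.topics.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)       # a mapping node listening
master.publish("/scan", {"ranges": [1.0, 2.5]})  # a sensor node publishing
```

In real ROS the master only performs name resolution; after lookup, nodes exchange messages directly over TCP/IP, which is why the master must be running before any node starts.&lt;br /&gt;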
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robots could serve as the hardware platform, but the TurtleBot3 Burger is the most suitable for this project: it is a low-cost, efficient and economical choice that embeds all the necessary functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map merging for a multi-robot system aims to reduce the time needed to obtain the environment map compared with single-robot mapping. The process separates into several stages. Each robot feeds odometry data (collected from its IMU and wheel encoders) and scan data (collected from its laser scan sensor) into SLAM and exploration, generating an individual detected map that is published to the ROS master node. The map-merging topic then subscribes to these maps and processes them as images with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: it combines two processes, localisation and mapping. The SLAM process lets a robot build a map that the robot itself then uses to navigate or explore. The localisation part means that while the robot is building its map from landmarks, it also localises itself against those landmarks at the same time [15]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping, the pose is determined from the observations (LiDAR information) and the controls (odometry information).&lt;br /&gt;
&lt;br /&gt;
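The Rao-Blackwellized factorisation that gmapping relies on can be written in its standard RBPF-SLAM form, where x denotes the robot trajectory, m the map, z the observations and u the odometry controls:&lt;br /&gt;

```latex
p(x_{1:t}, m \mid z_{1:t}, u_{1:t})
  = p(m \mid x_{1:t}, z_{1:t}) \, p(x_{1:t} \mid z_{1:t}, u_{1:t})
```

Each particle carries one trajectory hypothesis, and given that trajectory and the observations, its map can be computed analytically; this is what makes the pose, rather than the map, the main variable of the filter.&lt;br /&gt;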
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project explores the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot and cooperative mapping. The basic principle is frontier-based exploration, which detects the frontiers between mapped and unmapped space. Exploration can be separated into a SLAM part and a navigation part, using gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets the navigation destination among the frontier markers; frontiers are discussed further below. Once a destination is set, the robot navigates to it using the default DWA planner, which is divided into a global path planner and a local path planner. The global planner handles the destination of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified so that each robot explores a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
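The frontier idea above can be sketched in a few lines of Python. This is a minimal illustration using the common occupancy-grid convention (0 = free, 100 = occupied, -1 = unknown); the full WFD algorithm additionally clusters frontier cells into frontiers with a breadth-first search:&lt;br /&gt;

```python
# Minimal frontier-detection sketch on an occupancy grid.
# A frontier cell is a known-free cell with at least one unknown 4-neighbour;
# exploration then picks navigation goals among these cells.

def frontier_cells(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:  # only free cells can be frontier cells
                continue
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if nr in range(rows) and nc in range(cols) and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers
```

For example, on the grid [[-1, -1, -1], [0, 0, -1], [0, 100, 0]] the frontier cells are (1, 0), (1, 1) and (2, 2); the occupied cell and the interior free cell are excluded.&lt;br /&gt;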
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps from each robot are subscribed to as topics published to the ROS master node. Each subscribed map is processed in the map-merging topic, and the resulting merged map is published to all robots in preparation for the following tasks. Map merging necessarily relies on the single-robot exploration: it searches the topic list for the name of the map generated by each robot, then subscribes to it immediately. For each pair of subscribed maps, the map-merging topic performs the following steps: receive the map from each robot over the ad-hoc network, apply Oriented FAST and Rotated BRIEF (ORB) feature detection, use the image-matching algorithm proposed in [BPTX] together with the general image-stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
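As a rough illustration of the final compositing step, the sketch below overlays two occupancy grids once their relative offset is known. In the actual pipeline that offset (including rotation) is estimated by the ORB feature matching in OpenCV; here it is simply assumed, and only a translation is handled:&lt;br /&gt;

```python
# Simplified map compositing: overlay map_b onto map_a at a known offset.
# Cells use the occupancy convention 0 = free, 100 = occupied, -1 = unknown.
# Unknown cells defer to whichever map has information; where both maps have
# information, the higher (more conservative) occupancy value is kept.

def merge_maps(map_a, map_b, offset):
    dr, dc = offset                     # map_b's origin within map_a's frame
    merged = [row[:] for row in map_a]  # start from a copy of map_a
    for r, row in enumerate(map_b):
        for c, value in enumerate(row):
            mr, mc = r + dr, c + dc
            if mr in range(len(merged)) and mc in range(len(merged[0])):
                if merged[mr][mc] == -1:
                    merged[mr][mc] = value
                elif value != -1:
                    merged[mr][mc] = max(merged[mr][mc], value)
    return merged
```

This also shows why a wrong initial pose distorts the merged map: an incorrect offset places map_b's cells in the wrong part of map_a's frame.&lt;br /&gt;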
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project used Gazebo as a virtual environment, constructed to match the physical environment, while Rviz served as a visualisation tool for monitoring the robots’ behaviour. The figures show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, by contrast, the robots’ coordinates must be pre-set. The details and importance of this pre-setting are discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the same visualisation tools are still used for testing and debugging, with the exception of Gazebo, whose main purpose is to simulate the environment: once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experimental environment. In simulation, the initial poses can be set before the robots run in Gazebo; for two robots exploring the whole unknown environment, their initial poses were placed side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the initial pose for exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not been tested.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot matters in both the physical and the simulation environment, the map-merging topic supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots have had their initial poses set in the physical environment, map merging produces the final merged map. If the initial pose of either robot is set incorrectly, the merged map becomes distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the results and evaluation of cooperative mapping, the generated merged map and its component maps are shown on the right-hand side in []..., with the Gazebo simulation environment and the corresponding Rviz map on the left-hand side. Unfortunately, the sample size for cooperative mapping is not large enough for the chart to follow any distribution, but the reduction in exploration time can still be identified from the table: on average, cooperative exploration took 74.6% less time than the single-robot case.&lt;br /&gt;
&lt;br /&gt;
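For reference, the reduction rate is the relative saving in exploration time. The numbers in the example below are illustrative placeholders, not the project’s measured times:&lt;br /&gt;

```python
# Relative time saving of cooperative over single-robot exploration,
# expressed as a percentage of the single-robot exploration time.
def reduction_rate(single_time, coop_time):
    return (single_time - coop_time) / single_time * 100

# e.g. if a single robot needed 200 s and the team 50.8 s, the
# reduction rate would be 74.6%
```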
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Cooperative Mapping &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Cooperative mapping consists of several processes inside each robot, linked to the other robots and the remote PC by ad-hoc network communication. Each robot starts from its SLAM process to map the surrounding environment from odometry and LiDAR input, then passes the generated map to the exploration and map-merging processes. In the exploration process, each robot explores a defined area to reduce the exploration time; based on the test results of the modified Wavefront Frontier Detector (WFD) algorithm, the average time reduction is 74.6%. As each robot publishes its map to the remote PC, the map-merging topic merges the maps immediately. The robots stop once their exploration processes finish, but map merging must be stopped manually by the operator. In future work, the modified WFD algorithm should be optimised to minimise the overlapping area, and a larger sample should be collected. In addition, a beacon should be used instead of manually setting the initial pose of each robot for map merging, which would also simplify the exploration topic.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve the proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and test in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14975</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14975"/>
		<updated>2020-06-09T00:57:22Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which robots must explore a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time, so that all of the equipment needed to save the victim’s life is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is more than an operating system: it also provides development environments for building robot application programs [2]. ROS is backed by one of the largest robotics communities in the world, so developers can contribute and share their findings and solutions globally, rather than keeping them within a single university or company [2]. ROS can also be adapted for industrial use, performing cooperative tasks alongside closed-source components. In addition, ROS supports good debugging tools such as rqt, a visualisation tool that helps reveal errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be running before any program starts, because nodes rely on it to register and to establish connections with one another [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, after registering with the master node, using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. A message travels in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information a node sends or receives, much like a topic in a conversation. Each topic should have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives those messages after looking up the publisher via the master node [2].&lt;br /&gt;
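The publish/subscribe pattern behind these topics can be illustrated with a short plain-Python sketch. This is a hypothetical illustration, not rospy: a real ROS node would register with the master and exchange messages over TCP/IP, which is elided here.

```python
# Minimal publish/subscribe sketch mirroring ROS topic semantics.
# Hypothetical illustration only -- not the rospy API.

class Topic:
    """A named channel: publishers push messages, subscribers receive them."""

    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def subscribe(self, callback):
        # A subscriber registers a callback that runs on every message.
        self._callbacks.append(callback)

    def publish(self, message):
        # Messages flow one way only: from the publisher to every subscriber.
        for callback in self._callbacks:
            callback(message)

# Example: a LiDAR driver publishing a scan, a mapping node receiving it.
received = []
scan = Topic("/scan")
scan.subscribe(received.append)
scan.publish({"ranges": [1.2, 0.8, 2.5]})
```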
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, combining low cost with the necessary embedded functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain the environment map, compared with single-robot mapping, and is separated into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration stages to generate an individual detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging node and combined through image processing with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: it runs two processes together, localisation and mapping. The SLAM process lets a robot build a map that the robot itself can then use to navigate or explore. The localisation part means that while the robot is building its map from landmarks, it is also localising itself against those landmarks at the same time [5]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in being based on a Rao-Blackwellized particle filter (RBPF), in which each particle, meaning a pose (position and orientation) of the robot, is the main variable in the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping, the pose is determined from the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project makes the robots explore the unknown surrounding environment automatically, instead of a human driving SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. A frontier-based approach is used to detect frontiers, which is the basic principle of the exploration. The process is separated into SLAM and navigation parts, using gmapping and the DWA path planner respectively. On the SLAM side, gmapping supplies an occupancy grid map from which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed in the following content. Once the destination is set, the robot is navigated there by the default DWA planner, which is divided into a global path planner and a local path planner. The global planner produces the path to the destination of the current navigation step, which is not necessarily the final destination, while the local planner acts as an assistant that keeps the robot clear of obstacles as it follows the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
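The frontier idea can be sketched on a toy occupancy grid. This minimal, hypothetical version only marks free cells that touch unknown space; the actual WFD algorithm additionally runs breadth-first searches from the robot pose and groups such cells into contiguous frontiers.

```python
# Toy frontier detection on an occupancy grid, using the ROS OccupancyGrid
# value convention: -1 = unknown, 0 = free, 100 = occupied.
# Hypothetical sketch -- not the project's WFD implementation.

def find_frontier_cells(grid):
    """Return free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [
    [0,   0, -1],
    [0, 100, -1],
    [0,   0,  0],
]
cells = find_frontier_cells(grid)   # free cells on the known/unknown border
```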
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps produced by each robot are subscribed to as topics published to the ROS master node. After each robot’s map is received, it is processed by the map-merging node, and the resulting merged map is published to all robots in preparation for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), apply the image matching algorithm proposed by [BPTX] together with the general image stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
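The final overlay step can be sketched as follows, under a strong simplifying assumption: the relative transform recovered by ORB matching has already been reduced to a pure cell offset (the real OpenCV stitching pipeline also handles rotation and scale).

```python
# Toy overlay of two occupancy grids at a known (row, col) offset.
# Values follow the ROS OccupancyGrid convention:
# -1 = unknown, 0 = free, 100 = occupied.
# Hypothetical sketch -- the project uses OpenCV stitching instead.

def merge_grids(map_a, map_b, offset):
    """Overlay map_b onto a copy of map_a at the given cell offset."""
    dr, dc = offset
    merged = [row[:] for row in map_a]
    for r, row in enumerate(map_b):
        for c, cell in enumerate(row):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < len(merged) and 0 <= cc < len(merged[0])):
                continue                 # cell falls outside map_a
            if merged[rr][cc] == -1:
                merged[rr][cc] = cell    # any known value beats unknown
            elif cell == 100:
                merged[rr][cc] = 100     # occupied beats free
    return merged

map_a = [[0, -1], [-1, -1]]
map_b = [[100, 0]]
merged = merge_grids(map_a, map_b, (1, 0))
```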
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as its simulation test tool, providing a virtual environment constructed to match the physical environment arrangement, and RViz as a visualisation tool for monitoring the robots’ behaviour. The resulting graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, the robots’ coordinates must be pre-set. Further information on this pre-setting, and its importance, is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, with the exception of Gazebo: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer used. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the initial pose of each exploration run is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not yet been verified.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the one Gazebo provides. Once the two robots’ initial poses are set in the physical environment, the map-merging node generates the final merged map. If the initial pose of either robot is not set correctly, the merged map is distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results and Evaluations==&lt;br /&gt;
&lt;br /&gt;
For the result and evaluation of cooperative mapping, the generated merged map and its component maps on the right-hand side are shown in []... The left-hand side shows the simulation environment in Gazebo and the corresponding map visualised in RViz. Unfortunately, the sample size for cooperative mapping is not large enough for the timing chart to follow any distribution, but the exploration-time reduction can still be identified from the table, which shows a rate of 74.6% between the average exploration times of the single-robot and cooperative cases.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan across two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches for the project, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14950</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14950"/>
		<updated>2020-06-09T00:44:33Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which robots must explore a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time, so that all of the equipment needed to save the victim’s life is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is more than an operating system: it also provides development environments for building robot application programs [2]. ROS is backed by one of the largest robotics communities in the world, so developers can contribute and share their findings and solutions globally, rather than keeping them within a single university or company [2]. ROS can also be adapted for industrial use, performing cooperative tasks alongside closed-source components. In addition, ROS supports good debugging tools such as rqt, a visualisation tool that helps reveal errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be running before any program starts, because nodes rely on it to register and to establish connections with one another [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, after registering with the master node, using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. A message travels in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information a node sends or receives, much like a topic in a conversation. Each topic should have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives those messages after looking up the publisher via the master node [2].&lt;br /&gt;
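The publish/subscribe pattern behind these topics can be illustrated with a short plain-Python sketch. This is a hypothetical illustration, not rospy: a real ROS node would register with the master and exchange messages over TCP/IP, which is elided here.

```python
# Minimal publish/subscribe sketch mirroring ROS topic semantics.
# Hypothetical illustration only -- not the rospy API.

class Topic:
    """A named channel: publishers push messages, subscribers receive them."""

    def __init__(self, name):
        self.name = name
        self._callbacks = []

    def subscribe(self, callback):
        # A subscriber registers a callback that runs on every message.
        self._callbacks.append(callback)

    def publish(self, message):
        # Messages flow one way only: from the publisher to every subscriber.
        for callback in self._callbacks:
            callback(message)

# Example: a LiDAR driver publishing a scan, a mapping node receiving it.
received = []
scan = Topic("/scan")
scan.subscribe(received.append)
scan.publish({"ranges": [1.2, 0.8, 2.5]})
```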
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project. It is an efficient and economical choice, combining low cost with the necessary embedded functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain the environment map, compared with single-robot mapping, and is separated into several processes. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, pass through the SLAM and exploration stages to generate an individual detected map, which is published to the ROS master node. Those maps are then subscribed to by the map-merging node and combined through image processing with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: it runs two processes together, localisation and mapping. The SLAM process lets a robot build a map that the robot itself can then use to navigate or explore. The localisation part means that while the robot is building its map from landmarks, it is also localising itself against those landmarks at the same time [5]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in being based on a Rao-Blackwellized particle filter (RBPF), in which each particle, meaning a pose (position and orientation) of the robot, is the main variable in the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in gmapping, the pose is determined from the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project makes the robots explore the unknown surrounding environment automatically, instead of a human driving SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. A frontier-based approach is used to detect frontiers, which is the basic principle of the exploration. The process is separated into SLAM and navigation parts, using gmapping and the DWA path planner respectively. On the SLAM side, gmapping supplies an occupancy grid map from which the exploration approach finds the frontiers and then sets a navigation destination among the frontier markers; further frontier information is discussed in the following content. Once the destination is set, the robot is navigated there by the default DWA planner, which is divided into a global path planner and a local path planner. The global planner produces the path to the destination of the current navigation step, which is not necessarily the final destination, while the local planner acts as an assistant that keeps the robot clear of obstacles as it follows the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
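The frontier idea can be sketched on a toy occupancy grid. This minimal, hypothetical version only marks free cells that touch unknown space; the actual WFD algorithm additionally runs breadth-first searches from the robot pose and groups such cells into contiguous frontiers.

```python
# Toy frontier detection on an occupancy grid, using the ROS OccupancyGrid
# value convention: -1 = unknown, 0 = free, 100 = occupied.
# Hypothetical sketch -- not the project's WFD implementation.

def find_frontier_cells(grid):
    """Return free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:          # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers

grid = [
    [0,   0, -1],
    [0, 100, -1],
    [0,   0,  0],
]
cells = find_frontier_cells(grid)   # free cells on the known/unknown border
```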
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps produced by each robot are subscribed to as topics published to the ROS master node. After each robot’s map is received, it is processed by the map-merging node, and the resulting merged map is published to all robots in preparation for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the name of the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: receive the map from each robot over the ad-hoc network, detect features with Oriented FAST and Rotated BRIEF (ORB), apply the image matching algorithm proposed by [BPTX] together with the general image stitching algorithm in OpenCV, and output the merged map to the simultaneous arrival task.&lt;br /&gt;
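The final overlay step can be sketched as follows, under a strong simplifying assumption: the relative transform recovered by ORB matching has already been reduced to a pure cell offset (the real OpenCV stitching pipeline also handles rotation and scale).

```python
# Toy overlay of two occupancy grids at a known (row, col) offset.
# Values follow the ROS OccupancyGrid convention:
# -1 = unknown, 0 = free, 100 = occupied.
# Hypothetical sketch -- the project uses OpenCV stitching instead.

def merge_grids(map_a, map_b, offset):
    """Overlay map_b onto a copy of map_a at the given cell offset."""
    dr, dc = offset
    merged = [row[:] for row in map_a]
    for r, row in enumerate(map_b):
        for c, cell in enumerate(row):
            rr, cc = r + dr, c + dc
            if not (0 <= rr < len(merged) and 0 <= cc < len(merged[0])):
                continue                 # cell falls outside map_a
            if merged[rr][cc] == -1:
                merged[rr][cc] = cell    # any known value beats unknown
            elif cell == 100:
                merged[rr][cc] = 100     # occupied beats free
    return merged

map_a = [[0, -1], [-1, -1]]
map_b = [[100, 0]]
merged = merge_grids(map_a, map_b, (1, 0))
```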
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This project uses Gazebo as its simulation test tool, providing a virtual environment constructed to match the physical environment arrangement, and RViz as a visualisation tool for monitoring the robots’ behaviour. The resulting graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, the robots’ coordinates must be pre-set. Further information on this pre-setting, and its importance, is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, with the exception of Gazebo: its main function is to simulate the environment, so once the robots can be tested in the real world it is no longer used. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, initial poses can be set before the robots run in Gazebo; when two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot simply be set, because the robots are not included in a common reference frame. This problem could be solved with a beacon; without one, the initial pose of each exploration run is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not yet been verified.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot is important in both the physical and the simulation environment, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the one Gazebo provides. Once the two robots’ initial poses are set in the physical environment, the map-merging node generates the final merged map. If the initial pose of either robot is not set correctly, the merged map is distorted.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Results ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan across two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches for the project, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14949</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14949"/>
		<updated>2020-06-09T00:42:17Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping Testbed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which a team of robots explores a nuclear power plant. To save a victim’s life, the team is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots must reach the victim at the same time so that the victim can be treated with the full set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information, including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally, rather than problems remaining confined to a single university or company [2]. ROS can even be adapted for industrial use, performing cooperative tasks alongside closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any program runs, since without it nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, registering with the master and then communicating over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information a node handles, much like the topic of a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and a subscriber receives the messages published on it [2].&lt;br /&gt;
&lt;br /&gt;
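The master/node/topic relationship above can be illustrated with a small sketch. This is not the real rospy API, only a hypothetical in-process registry that plays the role of the master, showing how publishers and subscribers are decoupled by topic names:

```python
# Minimal illustration of the ROS publish/subscribe pattern (not the real
# rospy API): a registry stands in for the master, mapping topic names to
# subscriber callbacks; publishers push typed messages one way only.

class Master:
    """Name server: tracks which callbacks listen on which topic."""
    def __init__(self):
        self.topics = {}

    def subscribe(self, topic, callback):
        self.topics.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow unidirectionally: publisher to every subscriber.
        for callback in self.topics.get(topic, []):
            callback(message)

master = Master()
received = []

# A "node" whose single functionality is recording scan ranges.
master.subscribe('/scan', lambda msg: received.append(msg))

# Another node publishes a message (here a floating-point range reading).
master.publish('/scan', 0.75)
```

In real ROS the master only performs name resolution and the nodes then talk peer to peer over TCP/IP; here the registry delivers messages directly for brevity.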
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is the most efficient and economical choice, being low-cost with the necessary functionality already embedded.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the exploration time needed to obtain the environment map, compared with mapping by a single robot. The process separates into several stages. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from its laser scan sensor, pass through SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed by the map-merging node and processed with image-processing techniques, with OpenCV’s help. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: it combines two processes, localisation and mapping. A robot uses SLAM to build a map for itself so that it can navigate or explore. The localisation part means that while the robot builds its map from landmarks, it simultaneously localises itself using those same landmarks [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF): the particle, which here represents the pose (position and orientation) of the robot, is the main variable in the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
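As a hedged illustration of the RBPF idea described above (particles are candidate poses, moved by the control and re-weighted by the observation), the following sketch performs one filter step in the plane. The `expected_range` function stands in for ray-casting into the particle's map; it is an assumption of this example, not Gmapping's actual interface:

```python
# One Rao-Blackwellized particle-filter step, in the spirit of Gmapping:
# each particle is a robot pose; the control (odometry) moves it, and the
# observation (a LiDAR range) re-weights it. Map handling is omitted.
import math
import random

random.seed(0)

def rbpf_step(particles, control, observed_range, expected_range, sigma=0.2):
    # 1. Motion update: apply the odometry control with sampling noise.
    dx, dy = control
    moved = [(x + dx + random.gauss(0, 0.05),
              y + dy + random.gauss(0, 0.05)) for x, y in particles]
    # 2. Observation update: weight each particle by how well the LiDAR
    #    measurement matches the range expected from that pose.
    weights = []
    for x, y in moved:
        err = observed_range - expected_range(x, y)
        weights.append(math.exp(-err * err / (2 * sigma * sigma)))
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample: poses with higher weight survive to the next step.
    return random.choices(moved, weights=weights, k=len(moved))

# Robot believed near x = 0; a wall at x = 5 makes the expected range 5 - x.
particles = [(random.gauss(0, 0.3), 0.0) for _ in range(200)]
particles = rbpf_step(particles, control=(1.0, 0.0),
                      observed_range=4.0,
                      expected_range=lambda x, y: 5.0 - x)
```

After the step the particle cloud concentrates near x = 1, where the odometry and the range measurement agree.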
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; frontiers are discussed further in the following content. Once the goal is set, the robot navigates to it using the default DWA planner, which divides into a global path planner and a local path planner. The global planner produces the path to the goal of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that helps the robot avoid obstacles while it follows the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
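The notion of a frontier can be made concrete: in an occupancy grid, a frontier cell is a free cell bordering unknown space, and those cells are where the navigation goals above are placed. A minimal sketch follows; WFD proper finds the same cells with a breadth-first wavefront from the robot's pose, while this version simply scans the grid:

```python
# Frontier detection on an occupancy grid, the basis of frontier-based
# exploration: a frontier cell is a free cell with at least one unknown
# 4-neighbour. Cell values follow the usual occupancy-grid convention.
FREE, UNKNOWN, OCCUPIED = 0, -1, 100

def frontier_cells(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(nr in range(rows) and nc in range(cols)
                   and grid[nr][nc] == UNKNOWN
                   for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

# 3x4 grid: explored free space on the left, unknown space on the right.
grid = [
    [0,   0, -1, -1],
    [0, 100, -1, -1],
    [0,   0,  0, -1],
]
goals = frontier_cells(grid)   # cells on the free/unknown boundary
```

Restricting exploration to a limited area, as done in this project, amounts to discarding frontier cells that fall outside the chosen boundary.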
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps from each robot are subscribed as topics published to the ROS master node. After a map is received from each robot, it is processed by the map-merging node, and the final merged map is published back to all robots, ready for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: input the map from each robot over the ad-hoc network; apply Oriented FAST and Rotated BRIEF (ORB) feature detection, the image-matching algorithm proposed in [BPTX], and the general image-stitching algorithm in OpenCV; then output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
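The project's pipeline recovers the alignment between maps with ORB features and OpenCV stitching; the sketch below is a hypothetical simplification that assumes the alignment is already known as a non-negative integer cell offset, and shows only the final overlay step. It also makes clear why an incorrect initial pose distorts the merged map: the offset would simply be wrong.

```python
# Overlay two occupancy grids given a known relative offset (dr, dc),
# i.e. the final step of map merging once alignment is recovered.
# Assumes dr and dc are non-negative for brevity.
FREE, UNKNOWN, OCCUPIED = 0, -1, 100

def merge_maps(map_a, map_b, offset):
    """Overlay map_b onto map_a's frame, shifted by offset = (dr, dc)."""
    dr, dc = offset
    rows = max(len(map_a), len(map_b) + dr)
    cols = max(len(map_a[0]), len(map_b[0]) + dc)
    merged = [[UNKNOWN] * cols for _ in range(rows)]
    for r, row in enumerate(map_a):
        for c, cell in enumerate(row):
            merged[r][c] = cell
    for r, row in enumerate(map_b):
        for c, cell in enumerate(row):
            if cell != UNKNOWN:
                # Keep the more cautious reading: occupied beats free.
                current = merged[r + dr][c + dc]
                merged[r + dr][c + dc] = max(current, cell)
    return merged

# Robot B started two cells to the right of robot A.
map_a = [[0, 0, -1], [0, 100, -1]]
map_b = [[0, 0, 0], [0, 0, 100]]
merged = merge_maps(map_a, map_b, offset=(0, 2))
```

With the wrong offset, free and occupied cells from the two robots would be written on top of each other, producing the "strange" merged maps reported in the testbed section.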
== Cooperative Mapping Testbed ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The simulation Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The simulation tests in this project use Gazebo as a virtual environment, constructed to match the physical environment arrangement, while RViz is a visualisation tool used to monitor the robots’ behaviour. The graphs show that the occupancy grid map detected by the robot matches the Gazebo environment. One main advantage of Gazebo is that it provides a common fixed coordinate frame to all robots in the simulation; in the physical world, by contrast, the robots’ coordinates must be pre-set. Further information on, and the importance of, pre-setting the robots’ coordinates is discussed in the real-world part below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;The real world Environment&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
In the real world, the visualisation tools are still used for testing and debugging, but Gazebo is not, because its main function is to simulate the environment: once the robots can be tested in the real world, Gazebo is no longer needed. Unlike robots in the Gazebo simulation, robots in the physical environment do not share a common fixed reference frame.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Exploration boundary condition test&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration boundary condition depends on the experiment environment. In the simulation environment, the initial poses can be set before the robots run in Gazebo; when only two robots explore the whole unknown environment, their initial poses are set side by side with the same orientation, differing only along the y-axis. In the physical environment, however, the initial pose of each robot cannot be set this way, because the robots do not share a common reference frame. This problem could be solved with a beacon; without one, the pose for each robot’s exploration is hard to set and test. The test for a zero-degree yaw angle has been completed and works well, but other angles have not.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039; Map-Merging test &amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Because the initial pose of each robot matters in both the physical and the simulation environments, the map-merging node supplies a common fixed reference frame to both robots, which can be used in place of the Gazebo environment. Once the two robots’ initial poses are set in the physical environment, the map-merging node produces the final merged map. If the initial pose of either robot is not set correctly, the merged map will be distorted.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan over two semesters. The first stage was to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conducted a literature review to identify potential approaches to our project, and designed the base environment on which the whole project would be tested. The second stage concerned navigating a single robot to a destination: Zeping worked on building maps of the environment, while I was responsible for localisation and navigation. After each unit was completed, we performed system integration and tested in the physical environment, marking the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14935</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14935"/>
		<updated>2020-06-08T23:52:25Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which a team of robots explores a nuclear power plant. To save a victim’s life, the team is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots must reach the victim at the same time so that the victim can be treated with the full set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information, including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally, rather than problems remaining confined to a single university or company [2]. ROS can even be adapted for industrial use, performing cooperative tasks alongside closed-source components. ROS also provides good debugging tools such as rqt, a 2-dimensional visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any program runs, since without it nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information through topics, registering with the master and then communicating over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the information a node handles, much like the topic of a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and a subscriber receives the messages published on it [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is the most efficient and economical choice, being low-cost with the necessary functionality already embedded.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the exploration time needed to obtain the environment map, compared with mapping by a single robot. The process separates into several stages. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from its laser scan sensor, pass through SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed by the map-merging node and processed with image-processing techniques, with OpenCV’s help. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following parts.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping: it combines two processes, localisation and mapping. A robot uses SLAM to build a map for itself so that it can navigate or explore. The localisation part means that while the robot builds its map from landmarks, it simultaneously localises itself using those same landmarks [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF): the particle, which here represents the pose (position and orientation) of the robot, is the main variable in the process. In a general SLAM algorithm the pose estimate relies on the map, and vice versa; in Gmapping the pose is determined by the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector(WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the exploration process separates into SLAM and navigation parts, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; frontiers are discussed further in the following content. Once the goal is set, the robot navigates to it using the default DWA planner, which divides into a global path planner and a local path planner. The global planner produces the path to the goal of the current navigation step (not necessarily the real final destination), while the local planner acts as an assistant that helps the robot avoid obstacles while it follows the path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area instead of the whole unknown environment. ...[]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps from each robot are subscribed as topics published to the ROS master node. After a map is received from each robot, it is processed by the map-merging node, and the final merged map is published back to all robots, ready for the following tasks. Map merging necessarily relies on single-robot exploration: the node searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the map-merging node performs the following steps: input the map from each robot over the ad-hoc network; apply Oriented FAST and Rotated BRIEF (ORB) feature detection, the image-matching algorithm proposed in [BPTX], and the general image-stitching algorithm in OpenCV; then output the merged map to the simultaneous arrival task.&lt;br /&gt;
&lt;br /&gt;
== Cooperative Mapping Testbed ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To achieve our proposed approach, we present our project plan over two semesters. The first stage was to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conducted a literature review to identify potential approaches to our project, and designed the base environment on which the whole project would be tested. The second stage concerned navigating a single robot to a destination: Zeping worked on building maps of the environment, while I was responsible for localisation and navigation. After each unit was completed, we performed system integration and tested in the physical environment, marking the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report the progress of our project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14934</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14934"/>
		<updated>2020-06-08T23:51:20Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a rescue mission in which a team of robots explores a nuclear power plant. To save a victim’s life, the team is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots must reach the victim at the same time so that the victim can be treated with the full set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information, including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest open-source communities in robotics, so developers worldwide can share their findings and solutions rather than keeping them within a single university or company [2]. ROS can also be adapted for industrial cooperative tasks with closed-source components. In addition, ROS provides useful debugging tools such as rqt, a visualisation tool that reflects errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any nodes run, because nodes rely on it to establish connections with each other [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest unit of processing in ROS: one program designed for one functionality. Nodes exchange information over topics, establishing their connections through the master and communicating via the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what a stream of messages is about, much like a topic of conversation. Each topic needs at least one publisher and one subscriber: the publisher sends messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
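As an illustration of this publish/subscribe pattern, the following minimal Python sketch mimics a master keeping a registry of topics and delivering messages from publishers to subscribers. It is a toy in-process model only, not the rospy API; real ROS nodes use client libraries and communicate over TCP/IP.&lt;br /&gt;

```python
class Master:
    """Toy name server: maps topic names to subscriber callbacks."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # Register a subscriber node's callback for a topic
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver a publisher node's message to every subscriber of the topic
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)        # subscriber node
master.publish("/scan", {"ranges": [1.2, 0.8]})   # publisher node
```

After the publish call, the subscriber's callback has received the message once, showing the one-directional flow described above.&lt;br /&gt;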
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is a low-cost, efficient and economical platform with the necessary functionality built in.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment, compared with mapping by a single robot. The process has several stages. Each robot takes odometry data from its IMU and wheel encoders, together with scan data from its laser scanner, and runs SLAM and exploration to generate its own map, which is published to the ROS master node. These individual maps are then subscribed to by a map-merging node and combined using image-processing techniques with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: the robot builds a map of its surroundings and, at the same time, localises itself within that map, using landmarks for both [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot. In a general SLAM algorithm the pose estimate relies on the map and vice versa; in Gmapping, the pose is determined from the observation (LiDAR data) and the control (odometry data).&lt;br /&gt;
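The RBPF idea can be sketched as a predict/weight/resample cycle. The Python fragment below is a deliberately simplified one-dimensional illustration, not the actual Gmapping implementation: each particle is a scalar pose, the control is an odometry displacement, and the observation is an idealised range reading.&lt;br /&gt;

```python
import random

def rbpf_step(particles, control, observation, noise=0.1):
    """One predict/weight/resample cycle over 1-D robot poses.

    Predict:  propagate each particle by the odometry control plus Gaussian noise.
    Weight:   score each particle by how well the idealised observation fits it.
    Resample: draw a new particle set proportional to the weights.
    """
    # Predict from the control (odometry) input
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight by observation likelihood: closer predicted pose -> higher weight
    weights = [1.0 / (1e-6 + abs(observation - p)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportional to weight
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)  # deterministic run for the example
particles = rbpf_step([0.0, 0.1, -0.1, 0.05], control=1.0, observation=1.0)
```

After one step the particle set concentrates near the pose that both the control and the observation agree on.&lt;br /&gt;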
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector (WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Exploration in this project means having the robots explore the unknown environment automatically, instead of a human driving the SLAM process manually. Exploration applies to both single-robot and cooperative mapping. A frontier-based approach is used to detect frontiers, the basic principle of exploration. The process separates into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. Gmapping supplies an occupancy grid map, on which the exploration algorithm finds the frontiers and sets the next navigation goal among the frontier markers; frontiers are discussed further below. Once a goal is set, the robot navigates to it using the default DWA planner, which is divided into a global path planner and a local path planner. The global planner produces the path to the current navigation goal (not the mission’s final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along that path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area rather than the whole unknown environment.&lt;br /&gt;
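The frontier-detection step of WFD can be illustrated on a small occupancy grid. The sketch below is a simplified illustration, not the production code: it runs a breadth-first wavefront from the robot’s cell over known free space and collects frontier cells, i.e. free cells adjacent to unknown cells. As in ROS occupancy grids, 0 is free, 100 occupied and -1 unknown.&lt;br /&gt;

```python
from collections import deque

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # occupancy-grid cell values

def find_frontiers(grid, start):
    """BFS wavefront from the robot's cell over free space,
    collecting free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    frontiers, seen, queue = [], {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        # A frontier cell is free and touches unknown space
        if any(grid[nr][nc] == UNKNOWN for nr, nc in neighbours(r, c)):
            frontiers.append((r, c))
        for nr, nc in neighbours(r, c):
            if (nr, nc) not in seen and grid[nr][nc] == FREE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return frontiers

grid = [
    [0,   0, -1],
    [0,   0, -1],
    [0, 100, -1],
]
frontiers = find_frontiers(grid, (0, 0))
```

The returned cells would then serve as candidate navigation goals for the DWA planner.&lt;br /&gt;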
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Map Merging&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
On the remote PC, the maps published by each robot to the ROS master node are subscribed to as topics. Each subscribed map is processed by the map-merging node, and the resulting merged map is published back to all robots for the following tasks. Map merging necessarily relies on the single-robot exploration: the node searches the topic list for the map generated by each robot and subscribes to it immediately. For each pair of subscribed maps, the merging steps are: receive the maps over the ad-hoc network; detect features with Oriented FAST and Rotated BRIEF (ORB); match the features with the image-matching algorithm proposed in [BPTX]; and stitch the maps with the general image-stitching algorithm in OpenCV. The merged map is then output to the simultaneous-arrival task.&lt;br /&gt;
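The final cell-wise combination can be sketched as follows. This toy fragment assumes the two grids have already been aligned (the real pipeline performs the alignment with ORB feature matching and stitching in OpenCV) and simply lets an observed cell override an unknown one, preferring the occupied value when both robots observed a cell.&lt;br /&gt;

```python
UNKNOWN, OCCUPIED = -1, 100  # occupancy-grid cell values; 0 is free

def merge_maps(map_a, map_b):
    """Cell-wise merge of two pre-aligned occupancy grids.
    A cell one robot observed overrides the other's unknown;
    when both observed it, keep the more conservative (occupied) value."""
    merged = []
    for row_a, row_b in zip(map_a, map_b):
        merged.append([
            b if a == UNKNOWN else a if b == UNKNOWN else max(a, b)
            for a, b in zip(row_a, row_b)
        ])
    return merged

map_a = [[0, -1], [100, -1]]   # robot A explored the left column
map_b = [[-1, 0], [0, 0]]      # robot B explored the rest
merged = merge_maps(map_a, map_b)
```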
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping works on building maps of the environment, and I am responsible for localisation and navigation. After each unit is completed, we perform system integration and test in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To summarise our progress so far: we have built several environment maps with different parameters, providing a basis for quantifying mapping performance, and we have finished developing the navigation function of a single robot. A robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd. pp.10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd. p.482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. [S.l.]: Springer, p.4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp.263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14930</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14930"/>
		<updated>2020-06-08T23:43:41Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. Consider, for example, a rescue mission inside a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots must reach the victim at the same time so that the full set of equipment is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example, a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest open-source communities in robotics, so developers worldwide can share their findings and solutions rather than keeping them within a single university or company [2]. ROS can also be adapted for industrial cooperative tasks with closed-source components. In addition, ROS provides useful debugging tools such as rqt, a visualisation tool that reflects errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any nodes run, because nodes rely on it to establish connections with each other [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest unit of processing in ROS: one program designed for one functionality. Nodes exchange information over topics, establishing their connections through the master and communicating via the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what a stream of messages is about, much like a topic of conversation. Each topic needs at least one publisher and one subscriber: the publisher sends messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
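As an illustration of this publish/subscribe pattern, the following minimal Python sketch mimics a master keeping a registry of topics and delivering messages from publishers to subscribers. It is a toy in-process model only, not the rospy API; real ROS nodes use client libraries and communicate over TCP/IP.&lt;br /&gt;

```python
class Master:
    """Toy name server: maps topic names to subscriber callbacks."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # Register a subscriber node's callback for a topic
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver a publisher node's message to every subscriber of the topic
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)        # subscriber node
master.publish("/scan", {"ranges": [1.2, 0.8]})   # publisher node
```

After the publish call, the subscriber's callback has received the message once, showing the one-directional flow described above.&lt;br /&gt;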
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is a low-cost, efficient and economical platform with the necessary functionality built in.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment, compared with mapping by a single robot. The process has several stages. Each robot takes odometry data from its IMU and wheel encoders, together with scan data from its laser scanner, and runs SLAM and exploration to generate its own map, which is published to the ROS master node. These individual maps are then subscribed to by a map-merging node and combined using image-processing techniques with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: the robot builds a map of its surroundings and, at the same time, localises itself within that map, using landmarks for both [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot. In a general SLAM algorithm the pose estimate relies on the map and vice versa; in Gmapping, the pose is determined from the observation (LiDAR data) and the control (odometry data).&lt;br /&gt;
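The RBPF idea can be sketched as a predict/weight/resample cycle. The Python fragment below is a deliberately simplified one-dimensional illustration, not the actual Gmapping implementation: each particle is a scalar pose, the control is an odometry displacement, and the observation is an idealised range reading.&lt;br /&gt;

```python
import random

def rbpf_step(particles, control, observation, noise=0.1):
    """One predict/weight/resample cycle over 1-D robot poses.

    Predict:  propagate each particle by the odometry control plus Gaussian noise.
    Weight:   score each particle by how well the idealised observation fits it.
    Resample: draw a new particle set proportional to the weights.
    """
    # Predict from the control (odometry) input
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight by observation likelihood: closer predicted pose -> higher weight
    weights = [1.0 / (1e-6 + abs(observation - p)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportional to weight
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)  # deterministic run for the example
particles = rbpf_step([0.0, 0.1, -0.1, 0.05], control=1.0, observation=1.0)
```

After one step the particle set concentrates near the pose that both the control and the observation agree on.&lt;br /&gt;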
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector (WFD) Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Exploration in this project means having the robots explore the unknown environment automatically, instead of a human driving the SLAM process manually. Exploration applies to both single-robot and cooperative mapping. A frontier-based approach is used to detect frontiers, the basic principle of exploration. The process separates into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. Gmapping supplies an occupancy grid map, on which the exploration algorithm finds the frontiers and sets the next navigation goal among the frontier markers; frontiers are discussed further below. Once a goal is set, the robot navigates to it using the default DWA planner, which is divided into a global path planner and a local path planner. The global planner produces the path to the current navigation goal (not the mission’s final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along that path. Because the aim of cooperative mapping is to reduce exploration time, the WFD exploration method was modified to explore a limited area rather than the whole unknown environment.&lt;br /&gt;
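The frontier-detection step of WFD can be illustrated on a small occupancy grid. The sketch below is a simplified illustration, not the production code: it runs a breadth-first wavefront from the robot’s cell over known free space and collects frontier cells, i.e. free cells adjacent to unknown cells. As in ROS occupancy grids, 0 is free, 100 occupied and -1 unknown.&lt;br /&gt;

```python
from collections import deque

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # occupancy-grid cell values

def find_frontiers(grid, start):
    """BFS wavefront from the robot's cell over free space,
    collecting free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    frontiers, seen, queue = [], {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        # A frontier cell is free and touches unknown space
        if any(grid[nr][nc] == UNKNOWN for nr, nc in neighbours(r, c)):
            frontiers.append((r, c))
        for nr, nc in neighbours(r, c):
            if (nr, nc) not in seen and grid[nr][nc] == FREE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return frontiers

grid = [
    [0,   0, -1],
    [0,   0, -1],
    [0, 100, -1],
]
frontiers = find_frontiers(grid, (0, 0))
```

The returned cells would then serve as candidate navigation goals for the DWA planner.&lt;br /&gt;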
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping works on building maps of the environment, and I am responsible for localisation and navigation. After each unit is completed, we perform system integration and test in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To summarise our progress so far: we have built several environment maps with different parameters, providing a basis for quantifying mapping performance, and we have finished developing the navigation function of a single robot. A robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd. pp.10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd. p.482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. [S.l.]: Springer, p.4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp.263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14926</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14926"/>
		<updated>2020-06-08T23:38:29Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. Consider, for example, a rescue mission inside a nuclear power plant. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots must reach the victim at the same time so that the full set of equipment is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example, a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest open-source communities in robotics, so developers worldwide can share their findings and solutions rather than keeping them within a single university or company [2]. ROS can also be adapted for industrial cooperative tasks with closed-source components. In addition, ROS provides useful debugging tools such as rqt, a visualisation tool that reflects errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any nodes run, because nodes rely on it to establish connections with each other [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest unit of processing in ROS: one program designed for one functionality. Nodes exchange information over topics, establishing their connections through the master and communicating via the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. A message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what a stream of messages is about, much like a topic of conversation. Each topic needs at least one publisher and one subscriber: the publisher sends messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
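As an illustration of this publish/subscribe pattern, the following minimal Python sketch mimics a master keeping a registry of topics and delivering messages from publishers to subscribers. It is a toy in-process model only, not the rospy API; real ROS nodes use client libraries and communicate over TCP/IP.&lt;br /&gt;

```python
class Master:
    """Toy name server: maps topic names to subscriber callbacks."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        # Register a subscriber node's callback for a topic
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver a publisher node's message to every subscriber of the topic
        for callback in self.subscribers.get(topic, []):
            callback(message)

master = Master()
received = []
master.subscribe("/scan", received.append)        # subscriber node
master.publish("/scan", {"ranges": [1.2, 0.8]})   # publisher node
```

After the publish call, the subscriber's callback has received the message once, showing the one-directional flow described above.&lt;br /&gt;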
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is a low-cost, efficient and economical platform with the necessary functionality built in.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Cooperative Mapping ==&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the time needed to obtain a map of the environment, compared with mapping by a single robot. The process has several stages. Each robot takes odometry data from its IMU and wheel encoders, together with scan data from its laser scanner, and runs SLAM and exploration to generate its own map, which is published to the ROS master node. These individual maps are then subscribed to by a map-merging node and combined using image-processing techniques with the help of OpenCV. Finally, the merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed below.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping: the robot builds a map of its surroundings and, at the same time, localises itself within that map, using landmarks for both [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which each particle represents a pose (position and orientation) of the robot. In a general SLAM algorithm the pose estimate relies on the map and vice versa; in Gmapping, the pose is determined from the observation (LiDAR data) and the control (odometry data).&lt;br /&gt;
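The RBPF idea can be sketched as a predict/weight/resample cycle. The Python fragment below is a deliberately simplified one-dimensional illustration, not the actual Gmapping implementation: each particle is a scalar pose, the control is an odometry displacement, and the observation is an idealised range reading.&lt;br /&gt;

```python
import random

def rbpf_step(particles, control, observation, noise=0.1):
    """One predict/weight/resample cycle over 1-D robot poses.

    Predict:  propagate each particle by the odometry control plus Gaussian noise.
    Weight:   score each particle by how well the idealised observation fits it.
    Resample: draw a new particle set proportional to the weights.
    """
    # Predict from the control (odometry) input
    moved = [p + control + random.gauss(0.0, noise) for p in particles]
    # Weight by observation likelihood: closer predicted pose -> higher weight
    weights = [1.0 / (1e-6 + abs(observation - p)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample proportional to weight
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)  # deterministic run for the example
particles = rbpf_step([0.0, 0.1, -0.1, 0.05], control=1.0, observation=1.0)
```

After one step the particle set concentrates near the pose that both the control and the observation agree on.&lt;br /&gt;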
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
Exploration in this project means having the robots explore the unknown environment automatically, instead of a human driving the SLAM process manually. Exploration applies to both single-robot and cooperative mapping. A frontier-based approach is used to detect frontiers, the basic principle of exploration. The process separates into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. Gmapping supplies an occupancy grid map, on which the exploration algorithm finds the frontiers and sets the next navigation goal among the frontier markers; frontiers are discussed further below. Once a goal is set, the robot navigates to it using the default DWA planner, which is divided into a global path planner and a local path planner. The global planner produces the path to the current navigation goal (not the mission’s final destination), while the local planner acts as an assistant that helps the robot avoid obstacles along that path.&lt;br /&gt;
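The frontier-detection step can be illustrated on a small occupancy grid. The sketch below is a simplified illustration, not the production code: it runs a breadth-first wavefront from the robot’s cell over known free space and collects frontier cells, i.e. free cells adjacent to unknown cells. As in ROS occupancy grids, 0 is free, 100 occupied and -1 unknown.&lt;br /&gt;

```python
from collections import deque

FREE, OCCUPIED, UNKNOWN = 0, 100, -1  # occupancy-grid cell values

def find_frontiers(grid, start):
    """BFS wavefront from the robot's cell over free space,
    collecting free cells that border at least one unknown cell."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                yield nr, nc

    frontiers, seen, queue = [], {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        # A frontier cell is free and touches unknown space
        if any(grid[nr][nc] == UNKNOWN for nr, nc in neighbours(r, c)):
            frontiers.append((r, c))
        for nr, nc in neighbours(r, c):
            if (nr, nc) not in seen and grid[nr][nc] == FREE:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return frontiers

grid = [
    [0,   0, -1],
    [0,   0, -1],
    [0, 100, -1],
]
frontiers = find_frontiers(grid, (0, 0))
```

The returned cells would then serve as candidate navigation goals for the DWA planner.&lt;br /&gt;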
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping works on building maps of the environment, and I am responsible for localisation and navigation. After each unit is completed, we perform system integration and test in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulated environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14925</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14925"/>
		<updated>2020-06-08T23:38:18Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. Consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that the full set of equipment is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot then needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally rather than keeping them within a single university or company [2]. ROS can even be adapted for industrial use, performing cooperative tasks alongside closed-source components. It also ships with good debugging tools such as rqt, a visualisation tool that helps reveal errors while the robot is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master node and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies the content a node sends or receives, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
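The node/message/topic relationship above can be illustrated with a small plain-Python stand-in (a toy bus only; the real ROS client libraries register publishers and subscribers with the master and exchange messages over TCP/IP):&lt;br /&gt;

```python
# A toy topic bus illustrating the publish/subscribe pattern used by ROS.
# This is a plain-Python stand-in, not the ROS client library itself.

class TopicBus:
    def __init__(self):
        self._subscribers = {}   # topic name mapped to a list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Messages flow one way: from the publisher to every subscriber.
        for callback in self._subscribers.get(topic, []):
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("/scan", received.append)        # one "node" listening on /scan
bus.publish("/scan", {"ranges": [1.0, 2.0]})   # another "node" publishing
print(received)
```

The key property mirrored here is that publishers and subscribers never reference each other directly; they are decoupled through the topic name.&lt;br /&gt;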
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robot hardware could be used, but the TurtleBot3 Burger is well suited to this project: it is an efficient and economical choice, combining low cost with the necessary built-in functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ==&lt;br /&gt;
Cooperative map merging for the multi-robot system aims to reduce the time needed to obtain the environment map compared with single-robot mapping. The process is separated into several stages. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, feed the SLAM and exploration stages, which generate an individual map per robot published to the ROS master. These maps are then subscribed to by the map-merging node, which performs image processing with the help of OpenCV, and the final merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed in the following sections.&lt;br /&gt;
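As a minimal sketch of the cell-level merging rule (assuming the individual maps are already aligned in a common frame; the actual pipeline also performs the alignment itself using OpenCV-based image processing):&lt;br /&gt;

```python
# Toy occupancy-grid merge for two already-aligned maps.
# Convention: -1 unknown, 0 free, 100 occupied. A cell that either robot
# has observed overrides an unknown cell, and occupied wins over free so
# that obstacles are never erased by the merge.

def merge_cell(a, b):
    if a == -1:
        return b
    if b == -1:
        return a
    return max(a, b)          # 100 (occupied) dominates 0 (free)

def merge_maps(map_a, map_b):
    return [[merge_cell(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]

left  = [[-1, 0], [100, -1]]
right = [[0, -1], [0, 0]]
print(merge_maps(left, right))   # each robot fills the other's unknown cells
```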
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping, meaning two processes run together: localisation and mapping. The SLAM process lets the robot build a map that it can then use to navigate or explore. The localisation part means that while the robot builds its map from landmarks, it also localises itself against those same landmarks at the same time [15]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which a particle represents a robot pose (position and orientation). In a general SLAM algorithm the pose estimate relies on the map and vice versa, whereas in gmapping the pose is determined from the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the process is separated into SLAM and navigation parts, handled by gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; frontier detection is discussed further below. Once the goal is set, the robot navigates to it using the default planner stack, which is divided into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step, not necessarily the mission’s real final destination, while the local (DWA) planner can be regarded as an assistant that helps the robot avoid any obstacles along the path to that destination.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulated environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14924</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14924"/>
		<updated>2020-06-08T23:37:37Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. Consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that the full set of equipment is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot then needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally rather than keeping them within a single university or company [2]. ROS can even be adapted for industrial use, performing cooperative tasks alongside closed-source components. It also ships with good debugging tools such as rqt, a visualisation tool that helps reveal errors while the robot is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master node and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies the content a node sends or receives, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robot hardware could be used, but the TurtleBot3 Burger is well suited to this project: it is an efficient and economical choice, combining low cost with the necessary built-in functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
Cooperative map merging for the multi-robot system aims to reduce the time needed to obtain the environment map compared with single-robot mapping. The process is separated into several stages. Odometry data collected from each robot’s IMU and wheel encoders, together with scan data collected from the laser scan sensor, feed the SLAM and exploration stages, which generate an individual map per robot published to the ROS master. These maps are then subscribed to by the map-merging node, which performs image processing with the help of OpenCV, and the final merged map is published for use in multi-robot navigation. The overall flow chart is shown in the figure, and each stage is discussed in the following sections.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localisation and Mapping, meaning two processes run together: localisation and mapping. The SLAM process lets the robot build a map that it can then use to navigate or explore. The localisation part means that while the robot builds its map from landmarks, it also localises itself against those same landmarks at the same time [15]. Grid mapping (gmapping) is the SLAM algorithm adopted in this project. Gmapping is based on a Rao-Blackwellized particle filter (RBPF), in which a particle represents a robot pose (position and orientation). In a general SLAM algorithm the pose estimate relies on the map and vice versa, whereas in gmapping the pose is determined from the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project lets the robots explore the unknown surrounding environment automatically, instead of a human driving the SLAM manually. Exploration can be applied to both single-robot mapping and cooperative mapping. Frontier-based exploration is used as the basic principle for detecting frontiers, and the process is separated into SLAM and navigation parts, handled by gmapping and the DWA path planner respectively. For the SLAM part, gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; frontier detection is discussed further below. Once the goal is set, the robot navigates to it using the default planner stack, which is divided into a global path planner and a local path planner. The global path planner targets the destination of the current navigation step, not necessarily the mission’s real final destination, while the local (DWA) planner can be regarded as an assistant that helps the robot avoid any obstacles along the path to that destination.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulated environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14923</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14923"/>
		<updated>2020-06-08T23:37:19Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. Consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that the full set of equipment is available at once. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly-changing environment, a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot then needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally rather than keeping them within a single university or company [2]. ROS can even be adapted for industrial use, performing cooperative tasks alongside closed-source components. It also ships with good debugging tools such as rqt, a visualisation tool that helps reveal errors while the robot is running.&lt;br /&gt;
The master provides a name server for all node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master node and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; is the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. Messages flow unidirectionally from one node to another through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies the content a node sends or receives, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robot hardware could be used, but the TurtleBot3 Burger is well suited to this project: it is an efficient and economical choice, combining low cost with the necessary built-in functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to determine where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples (particles) to represent the likelihood of the robot’s position; based on the sensor data, those samples converge to the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
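The particle idea behind AMCL can be sketched in a few lines of plain Python (a 1-D toy weighting-and-resampling step under an assumed Gaussian measurement model, not the ROS amcl node):&lt;br /&gt;

```python
# Toy 1-D Monte Carlo localisation update (a sketch of the idea behind AMCL,
# not the ROS amcl node). Particles are candidate robot positions; each gets
# a weight from how well a range measurement fits, then we resample.
import math
import random

def mcl_update(particles, measured_range, landmark, sigma=0.5):
    # Weight each particle by the likelihood of the measurement.
    weights = [math.exp(-0.5 * ((landmark - p - measured_range) / sigma) ** 2)
               for p in particles]
    # Resample: likely positions are duplicated, unlikely ones die out.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
landmark = 10.0
true_position = 4.0
particles = [random.uniform(0.0, 10.0) for _ in range(200)]
for _ in range(5):
    particles = mcl_update(particles, landmark - true_position, landmark)
estimate = sum(particles) / len(particles)
print(estimate)   # the particle mean converges near the true position
```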
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives faster execution than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches a velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles the robot might collide with.&lt;br /&gt;
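A compact sketch of A* on a small grid with a Manhattan-distance heuristic (illustrative only; the project relies on the ROS planners rather than this code):&lt;br /&gt;

```python
# A sketch of A* grid search with a Manhattan-distance heuristic.
import heapq

def astar(grid, start, goal):
    """Shortest 4-connected path on a grid of 0 (free) / 1 (blocked) cells."""
    rows, cols = len(grid), len(grid[0])
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, cell, path so far)
    visited = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + dr, cell[1] + dc
            if nr in range(rows) and nc in range(cols) and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(frontier,
                               (cost + 1 + h(step), cost + 1, step, path + [step]))
    return None   # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # routes around the blocked middle row
```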
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used, a relatively new technique based on Velocity Obstacles that works with fully distributed communication among the robots. A Velocity Obstacle is the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
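The velocity-selection idea can be shown in a deliberately simplified one-dimensional sketch (CALU itself works with two-dimensional velocity obstacles and folds in AMCL localisation uncertainty, which is omitted here; all numbers are made up for the example):&lt;br /&gt;

```python
# Simplified 1-D velocity-obstacle selection: velocities inside the
# "collision interval" would lead to a collision with another robot, so we
# pick the admissible velocity closest to the preferred one.

def choose_velocity(preferred, collision_interval, candidates):
    lo, hi = collision_interval
    # Keep only candidate velocities outside the collision interval.
    admissible = [v for v in candidates if v > hi or lo > v]
    # Among those, take the one closest to the preferred velocity.
    return min(admissible, key=lambda v: abs(v - preferred))

candidates = [round(0.05 * i, 2) for i in range(11)]   # 0.0 .. 0.5 m/s
v = choose_velocity(0.25, (0.2, 0.4), candidates)      # slows to 0.15 m/s
```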
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our proposed approach follows a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on the progress of our project so far: we have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot. A robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14921</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14921"/>
		<updated>2020-06-08T23:36:38Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots explore a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all of them are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a-priori knowledge, for example a map. Each robot will need to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is, as its name suggests, a popular platform for robot software, but it is not only an operating system: it also provides different development environments for developing robot application programs [2]. ROS has built one of the largest developer communities in the world, so developers and other talented people can easily contribute and share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. Besides, ROS supports good debugging tools such as rqt, a visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be running before any other program starts; otherwise nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS, meaning one program designed for one functionality. Nodes exchange information over topics: they register with the Master node, then communicate with each other using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message may only travel unidirectionally from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what the messages exchanged between nodes are about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is the node responsible for publishing messages on the topic, and the subscriber is the node that receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many kinds of robot could be used, but the TurtleBot3 Burger is the most suitable for this project. It is the most efficient and economical choice, being low-cost while embedding the necessary functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
Cooperative map-merging for a multi-robot system aims to reduce the exploration time needed to obtain the environment map compared with single-robot mapping. It is separated into several processes: the odometry data collected from each robot’s IMU and encoders, together with the scan data collected from the laser scan sensor, pass through SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed to by a map-merging topic and combined with image-processing operations using OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following part.&lt;br /&gt;
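The merging step can be illustrated with a toy merge of two already-aligned occupancy grids (a sketch only; the real pipeline first aligns the maps using OpenCV image processing, which is not shown):&lt;br /&gt;

```python
# Toy merge of two aligned occupancy grids.
# Cell values follow the ROS convention: -1 unknown, 0 free, 100 occupied.

def merge_cell(a, b):
    if a == -1:
        return b
    if b == -1:
        return a
    return max(a, b)   # prefer "occupied" when the robots disagree

def merge_maps(map_a, map_b):
    return [[merge_cell(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]

map_a = [[-1, 0, 100],
         [-1, -1, 0]]
map_b = [[0, -1, 0],
         [100, 0, -1]]
merged = merge_maps(map_a, map_b)   # [[0, 0, 100], [100, 0, 0]]
```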
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Gmapping SLAM&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping, two processes carried out together. Basically, the SLAM process is what a robot uses to build a map for itself so that it can navigate or explore. The localization part means that while the robot is building its map from landmarks, it also localizes itself against those same landmarks at the same time [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF), where the particle, meaning the pose (position and orientation) of the robot, is the main variable in the process. In the general SLAM formulation the pose estimate relies on the map, and vice versa; in Gmapping, the pose is determined by the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
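The mapping half of this process rests on per-cell occupancy updates, which can be sketched as a minimal log-odds update for a single grid cell (the inverse sensor model probabilities are assumed values; Gmapping adds the Rao-Blackwellized particle filter on top of this bookkeeping):&lt;br /&gt;

```python
# Minimal log-odds occupancy update for one grid cell (sketch only).
import math

def logit(p):
    return math.log(p / (1.0 - p))

P_HIT, P_MISS = 0.7, 0.4    # inverse sensor model (assumed values)

def update(cell_logodds, hit):
    # Add the sensor evidence: positive for a hit, negative for a miss.
    return cell_logodds + (logit(P_HIT) if hit else logit(P_MISS))

def probability(cell_logodds):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(cell_logodds))

l = 0.0                      # prior: p = 0.5 (unknown)
for observation in (True, True, True, False):
    l = update(l, observation)
# Three hits and one miss leave the cell likely occupied.
```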
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Wavefront Frontier Detector Exploration&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
The exploration process in this project automatically explores the unknown surrounding environment with the robots, instead of a human driving the SLAM manually. It can be applied to both single-robot mapping and cooperative mapping. The frontier-based exploration approach is used to detect frontiers, which is the basic principle of exploration. The process is split into a SLAM part and a navigation part, using Gmapping and the DWA path planner respectively. For the SLAM part, Gmapping supplies an occupancy grid map in which the exploration approach finds the frontiers and then sets a navigation goal among the frontier markers; frontier detection is discussed further in the following content. Once a goal has been set, the robot navigates to it using the default DWA-based planner, which is divided into a global path planner and a local path planner. The global path planner produces the path to the goal of the current navigation step (not necessarily the final destination), while the local path planner acts as an assistant that helps the robot avoid any obstacle along the path.&lt;br /&gt;
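The frontier idea above can be sketched as a simple scan over an occupancy grid (the wavefront frontier detector organises this search as a breadth-first search from the robot pose rather than a full scan; cell values here follow the ROS occupancy-grid convention):&lt;br /&gt;

```python
# Simple frontier detector on an occupancy grid (sketch of the idea).
# A frontier cell is a free cell adjacent to an unknown cell.

FREE, OCC, UNKNOWN = 0, 100, -1

def frontier_cells(grid):
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != FREE:
                continue
            neighbours = ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
            if any(nr in range(rows) and nc in range(cols)
                   and grid[nr][nc] == UNKNOWN for nr, nc in neighbours):
                frontiers.append((r, c))
    return frontiers

grid = [[0, 0, -1],
        [0, 100, -1],
        [0, 0, 0]]
cells = frontier_cells(grid)   # [(0, 1), (2, 2)]
```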
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples converge to the most likely robot position. AMCL can decrease the number of samples needed once the robot is localised, which saves a lot of computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally allows a faster execution time than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches the velocity space for the optimal velocity to reach the destination along the global path, while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used, a relatively new technique based on Velocity Obstacles that works with fully distributed communication among the robots. A Velocity Obstacle is the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our proposed approach follows a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on the progress of our project so far: we have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot. A robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14917</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14917"/>
		<updated>2020-06-08T23:18:40Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider the case in which robots explore a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all of them are required to be at the victim’s location at the same time so that the victim has enough equipment to survive. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a-priori knowledge, for example a map. Each robot will need to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is, as its name suggests, a popular platform for robot software, but it is not only an operating system: it also provides different development environments for developing robot application programs [2]. ROS has built one of the largest developer communities in the world, so developers and other talented people can easily contribute and share their findings and solutions worldwide, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. Besides, ROS supports good debugging tools such as rqt, a visualisation tool that reflects any error on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be running before any other program starts; otherwise nodes cannot establish connections or exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS, meaning one program designed for one functionality. Nodes exchange information over topics: they register with the Master node, then communicate with each other using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message may only travel unidirectionally from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; identifies what the messages exchanged between nodes are about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is the node responsible for publishing messages on the topic, and the subscriber is the node that receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many kinds of robot could be used, but the TurtleBot3 Burger is the most suitable for this project. It is the most efficient and economical choice, being low-cost while embedding the necessary functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:Final flow for ppt.png|thumb]]&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
Cooperative map-merging for the multi-robot system is separated into several processes: the odometry data collected from each robot’s IMU and encoders, together with the scan data collected from the laser scan sensor, pass through SLAM and exploration to generate an individual map, which is published to the ROS master node. Those maps are then subscribed to by a map-merging topic and combined with image-processing operations using OpenCV. Finally, the merged map is published to my partner for use in multi-robot navigation. The overall flow chart is shown in the figure, and the whole process is discussed in the following part.&lt;br /&gt;
&lt;br /&gt;
Gmapping SLAM&lt;br /&gt;
&lt;br /&gt;
SLAM stands for Simultaneous Localization and Mapping, two processes carried out together. Basically, the SLAM process is what a robot uses to build a map for itself so that it can navigate or explore. The localization part means that while the robot is building its map from landmarks, it also localizes itself against those same landmarks at the same time [15]. Grid mapping (Gmapping) is the SLAM algorithm adopted in this project. Gmapping differs from the general SLAM approach in that it is based on a Rao-Blackwellized particle filter (RBPF), where the particle, meaning the pose (position and orientation) of the robot, is the main variable in the process. In the general SLAM formulation the pose estimate relies on the map, and vice versa; in Gmapping, the pose is determined by the observation (LiDAR information) and the control (odometry information).&lt;br /&gt;
&lt;br /&gt;
Wavefront Frontier Detector Exploration&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples converge to the most likely robot position. AMCL can decrease the number of samples needed once the robot is localised, which saves a lot of computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally allows a faster execution time than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches the velocity space for the optimal velocity to reach the destination along the global path, while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used, a relatively new technique based on Velocity Obstacles that works with fully distributed communication among the robots. A Velocity Obstacle is the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our proposed approach follows a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: it is planned that Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on the progress of our project so far: we have built several environment maps with different parameters to provide a basis for quantifying the mapping performance, and we have finished developing the navigation function for a single robot. A robot now has the ability to localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, MA: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision avoidance under bounded localization uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Final_flow_for_ppt.png&amp;diff=14914</id>
		<title>File:Final flow for ppt.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Final_flow_for_ppt.png&amp;diff=14914"/>
		<updated>2020-06-08T23:10:41Z</updated>

		<summary type="html">&lt;p&gt;A1693573: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Structure of the Cooperative Mapping&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14913</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14913"/>
		<updated>2020-06-08T23:09:00Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Turtlebot3 Burger */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a wide range of complex tasks. Consider, for example, a team of robots sent into a nuclear power plant on a rescue mission: to save a victim’s life, the team must transport medical equipment to the victim’s location. Each robot carries a different type of equipment, and all of the equipment must arrive at the same time for the victim to be saved. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given prior knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Despite its name, the Robot Operating System (ROS) is not an operating system but a popular platform for robot software: it provides a range of development environments and tools for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted by industry to perform cooperative tasks with closed-source code. ROS also provides good debugging tools such as rqt, a visualisation tool that surfaces errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish the connections needed to exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the content that nodes communicate about, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
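The node, message and topic concepts above can be sketched as a toy publish/subscribe system in plain Python (this stand-in broker only mimics the master’s name resolution; rospy itself is not used):&lt;br /&gt;

```python
# Minimal toy publish/subscribe system mimicking ROS topics.
# Plain-Python sketch: the "master" here only resolves topic names,
# after which messages flow one way from publisher to subscribers.

class Topic:
    """A named channel carrying messages from publishers to subscribers."""
    def __init__(self, name):
        self.name = name
        self.subscribers = []   # callbacks registered by subscriber nodes

class Master:
    """Name server: resolves topic names, like the ROS master."""
    def __init__(self):
        self.topics = {}
    def lookup(self, name):
        # create the topic on first use
        return self.topics.setdefault(name, Topic(name))

class Node:
    """Smallest processing unit: one program, one functionality."""
    def __init__(self, master, name):
        self.master = master
        self.name = name
    def publish(self, topic_name, message):
        # deliver the message to every subscriber of the topic
        for callback in self.master.lookup(topic_name).subscribers:
            callback(message)
    def subscribe(self, topic_name, callback):
        self.master.lookup(topic_name).subscribers.append(callback)

master = Master()
talker = Node(master, "talker")
listener = Node(master, "listener")

received = []
listener.subscribe("/cmd_vel", received.append)
talker.publish("/cmd_vel", {"linear": 0.1, "angular": 0.0})
print(received)  # the one published message, delivered to the subscriber
```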
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robots could serve as the hardware platform, but the TurtleBot3 Burger is the most suitable for this project: it is an efficient and economical choice, combining low cost with all of the necessary embedded functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
[[File:Size of Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples then converge towards the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves a considerable amount of computational resources.&lt;br /&gt;
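The sampling-and-convergence idea behind Monte Carlo Localisation can be sketched in a few lines of Python (a hypothetical 1-D example, not the actual AMCL implementation, which also adapts the particle count):&lt;br /&gt;

```python
import math
import random

# Toy 1-D Monte Carlo localisation: particles are candidate robot positions,
# a Gaussian sensor model weights them, and resampling concentrates the
# particle set around the most likely position.

random.seed(0)
TRUE_POS = 7.0   # hypothetical ground-truth position on a 10 m corridor
SIGMA = 0.2      # assumed sensor noise standard deviation

def sense():
    # noisy reading of the true position
    return random.gauss(TRUE_POS, SIGMA)

def likelihood(particle, reading):
    # unnormalised Gaussian sensor likelihood
    d = particle - reading
    return math.exp(-(d * d) / (2 * SIGMA * SIGMA))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(10):
    z = sense()
    weights = [likelihood(p, z) for p in particles]
    # resample in proportion to weight, keeping plausible hypotheses
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # the estimate clusters near the true position of 7.0
```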
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches a velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
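As a sketch of how the heuristic search works, here is a minimal A* implementation on a hypothetical occupancy grid (illustrative only; it is not the ROS global planner itself):&lt;br /&gt;

```python
import heapq

# A* on a small 4-connected occupancy grid. The Manhattan-distance heuristic
# steers the search towards the goal, so it usually expands far fewer cells
# than an uninformed exact method such as Dijkstra's algorithm.

GRID = [  # 0 = free cell, 1 = obstacle
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
N = 4  # grid is N x N

def astar(start, goal):
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    frontier = [(h(start), start)]  # priority queue ordered by f = g + h
    g = {start: 0}                  # cheapest known cost from the start
    came_from = {}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:         # reconstruct the path backwards
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb[0] in range(N) and nb[1] in range(N) and GRID[nb[0]][nb[1]] == 0:
                if g.get(nb, float("inf")) > g[current] + 1:
                    g[nb] = g[current] + 1
                    came_from[nb] = current
                    heapq.heappush(frontier, (g[nb] + h(nb), nb))
    return None  # no path exists

print(astar((0, 0), (3, 3)))  # a shortest 7-cell path around the obstacles
```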
The information-exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that could lead to a collision; CALU chooses velocities outside this set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
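The core Velocity Obstacles test can be illustrated as follows (a simplified sketch with assumed positions and radii; it is not the CALU implementation, which additionally inflates the radii by the bounded localisation uncertainty):&lt;br /&gt;

```python
# Velocity Obstacles, simplified: a candidate velocity for robot A is
# rejected if, with robot B's velocity held fixed, the relative motion
# brings the two robots closer together than the sum of their radii
# within a short time horizon.

def leads_to_collision(pos_a, vel_a, pos_b, vel_b, radius_sum, horizon=5.0, dt=0.05):
    """Sample the relative motion over the horizon and test for overlap."""
    rx = pos_b[0] - pos_a[0]
    ry = pos_b[1] - pos_a[1]
    vx = vel_b[0] - vel_a[0]
    vy = vel_b[1] - vel_a[1]
    t = 0.0
    while horizon > t:
        dx = rx + vx * t
        dy = ry + vy * t
        if radius_sum * radius_sum > dx * dx + dy * dy:
            return True  # circles overlap at time t: velocity is forbidden
        t += dt
    return False

# Robot B sits 2 m ahead of robot A and is stationary; radii sum to 0.3 m.
pos_a, pos_b, vel_b = (0.0, 0.0), (2.0, 0.0), (0.0, 0.0)

print(leads_to_collision(pos_a, (0.5, 0.0), pos_b, vel_b, 0.3))  # True: heading straight at B
print(leads_to_collision(pos_a, (0.5, 0.2), pos_b, vel_b, 0.3))  # False: steering aside escapes the cone
```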
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and test in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on our progress so far: we have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot. A robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Size_of_Burger.png&amp;diff=14912</id>
		<title>File:Size of Burger.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Size_of_Burger.png&amp;diff=14912"/>
		<updated>2020-06-08T23:08:45Z</updated>

		<summary type="html">&lt;p&gt;A1693573: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The size of Turtlebot3 Burger&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14911</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14911"/>
		<updated>2020-06-08T23:07:40Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Turtlebot3 Burger */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a wide range of complex tasks. Consider, for example, a team of robots sent into a nuclear power plant on a rescue mission: to save a victim’s life, the team must transport medical equipment to the victim’s location. Each robot carries a different type of equipment, and all of the equipment must arrive at the same time for the victim to be saved. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given prior knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Despite its name, the Robot Operating System (ROS) is not an operating system but a popular platform for robot software: it provides a range of development environments and tools for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted by industry to perform cooperative tasks with closed-source code. ROS also provides good debugging tools such as rqt, a visualisation tool that surfaces errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish the connections needed to exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the content that nodes communicate about, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robots could serve as the hardware platform, but the TurtleBot3 Burger is the most suitable for this project: it is an efficient and economical choice, combining low cost with all of the necessary embedded functionality.&lt;br /&gt;
[[File:Turtlebot3 Burger.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples then converge towards the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves a considerable amount of computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches a velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information-exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that could lead to a collision; CALU chooses velocities outside this set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and test in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on our progress so far: we have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot. A robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Turtlebot3_Burger.png&amp;diff=14909</id>
		<title>File:Turtlebot3 Burger.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Turtlebot3_Burger.png&amp;diff=14909"/>
		<updated>2020-06-08T23:07:25Z</updated>

		<summary type="html">&lt;p&gt;A1693573: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This figure illustrates the hardware of the TurtleBot3 Burger robot&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14907</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14907"/>
		<updated>2020-06-08T23:04:54Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Turtlebot3 Burger */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a wide range of complex tasks. Consider, for example, a team of robots sent into a nuclear power plant on a rescue mission: to save a victim’s life, the team must transport medical equipment to the victim’s location. Each robot carries a different type of equipment, and all of the equipment must arrive at the same time for the victim to be saved. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on given prior knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
Despite its name, the Robot Operating System (ROS) is not an operating system but a popular platform for robot software: it provides a range of development environments and tools for building robot application programs [2]. ROS has one of the largest developer communities in the world, so developers can share their findings and solutions globally, rather than problems remaining unsolved or confined within a single university or company [2]. ROS can even be adapted by industry to perform cooperative tasks with closed-source code. ROS also provides good debugging tools such as rqt, a visualisation tool that surfaces errors on the robot while it is running.&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;master&amp;#039;&amp;#039;&amp;#039; provides a name server for node connections and message communication. It must be started before any program runs; otherwise nodes cannot establish the connections needed to exchange messages [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes exchange information over topics, registering with the master and then communicating with each other over the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. Each message has a data type, such as integer, Boolean or floating point. Messages flow in one direction only, from one node to another, through publishing and subscribing [2].&lt;br /&gt;
&lt;br /&gt;
A &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names the content that nodes communicate about, much like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher is responsible for publishing messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
Many kinds of robots could serve as the hardware platform, but the TurtleBot3 Burger is the most suitable for this project: it is an efficient and economical choice, combining low cost with all of the necessary embedded functionality.&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples then converge towards the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves a considerable amount of computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path-planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path-planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path-planning technique that searches a velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information-exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that could lead to a collision; CALU chooses velocities outside this set to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and test in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To report on our progress so far: we have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot. A robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The Dynamic Window Approach to Collision Avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14906</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14906"/>
		<updated>2020-06-08T23:01:59Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all of them are required to reach the victim at the same time so that the victim can be treated with the complete set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== Robot Operating System (ROS) ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest communities in the world, so developers and other talented people can easily contribute and share their findings and solutions worldwide, rather than keeping them within a single university or company [2]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a visualisation tool that helps reveal errors while the robot is running.&lt;br /&gt;
The master provides a name server for node connections and message communication. It must be running before any other program starts, since nodes cannot find each other and exchange messages without it [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;node&amp;#039;&amp;#039;&amp;#039; is the smallest processing unit in ROS: one program designed for one functionality. Nodes register with the master and then exchange information over topics using the TCP/IP protocol [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;message&amp;#039;&amp;#039;&amp;#039; represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message travels in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
&lt;br /&gt;
The &amp;#039;&amp;#039;&amp;#039;topic&amp;#039;&amp;#039;&amp;#039; names what a stream of messages is about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher publishes messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
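The node, message, and topic concepts above can be sketched with a toy publish/subscribe mechanism. This is plain Python for illustration only, not the real rospy API; the Topic class and the /cmd_vel name are assumptions made for the example:

```python
class Topic:
    """Toy stand-in for a ROS topic: relays each published message,
    one way, from the publisher to every subscriber callback."""
    def __init__(self, name):
        self.name = name
        self.callbacks = []

    def subscribe(self, callback):
        # A subscriber node registers a callback for this topic.
        self.callbacks.append(callback)

    def publish(self, message):
        # A publisher node pushes one message to all current subscribers.
        for callback in self.callbacks:
            callback(message)

received = []
cmd_vel = Topic("/cmd_vel")                       # velocity-command topic
cmd_vel.subscribe(received.append)                # one subscriber node
cmd_vel.publish({"linear": 0.2, "angular": 0.0})  # one publisher node
```

In real ROS the master only performs the name lookup; after that, messages flow directly between the nodes over TCP/IP.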
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
For the robot hardware, many kinds of robots could be used, but the TurtleBot3 Burger is the most suitable for this project: it is an efficient and economical choice, low-cost and with the necessary functionality embedded.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. Guided by the sensor data, these samples then converge to the most likely robot position. AMCL decreases the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
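The Monte Carlo idea can be sketched as a minimal 1-D particle filter. The noise levels, particle count, and motion values below are arbitrary assumptions for the sketch, not project settings:

```python
import random, math

def mcl_step(particles, move, measured_pos):
    """One predict/update/resample cycle of Monte Carlo Localisation
    for a robot on a line (1-D), with a noisy position sensor."""
    # Predict: apply the motion command with some motion noise.
    particles = [p + move + random.gauss(0.0, 0.1) for p in particles]
    # Update: weight each particle by how well it explains the sensor.
    weights = [math.exp(-(p - measured_pos) ** 2) for p in particles]
    # Resample: draw a new set, favouring high-weight particles.
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # unknown start
true_pos = 2.0
for _ in range(20):   # robot moves +0.3 each step; sensor reads truth plus noise
    true_pos += 0.3
    particles = mcl_step(particles, 0.3, true_pos + random.gauss(0.0, 0.2))
estimate = sum(particles) / len(particles)   # converges near true_pos
```

AMCL extends this by shrinking the particle set adaptively once the estimate has converged.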
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
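A minimal A* on a 4-connected grid illustrates the heuristic search described above. The grid, start, and goal here are made-up test data; a real planner would run on the ROS costmap:

```python
import heapq

def a_star(blocked, size, start, goal):
    """A* shortest path on a size-by-size 4-connected grid.
    blocked is a set of impassable cells; returns the path as a list of cells."""
    def h(p):  # Manhattan-distance heuristic, admissible on a grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, cell, path)
    visited = set()
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)  # cheapest f first
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt[0] in range(size) and nxt[1] in range(size) and nxt not in blocked:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

# A wall at x = 1 (rows 0 and 1) forces a detour over the top of the grid.
path = a_star({(1, 0), (1, 1)}, 3, (0, 0), (2, 0))
```

Because the heuristic never overestimates the true cost, the first time the goal is popped the path is optimal.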
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; although easy to implement, it cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. The Velocity Obstacles method maintains the set of velocities that would lead to a collision. CALU chooses velocities outside this set to avoid obstacles and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
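The Velocity Obstacles test can be sketched geometrically: a relative velocity is dangerous when it points inside the collision cone subtended by the other robot. The sketch below returns a signed margin rather than a boolean (positive means the velocity lies inside the cone); the radii and vectors in the example are invented:

```python
import math

def collision_cone_margin(p_rel, v_rel, combined_radius):
    """Signed margin for a simple Velocity Obstacles check.
    p_rel: other robot's position relative to ours, as (x, y).
    v_rel: our velocity relative to theirs, as (x, y).
    combined_radius: sum of both robots' radii.
    Positive result: v_rel points inside the collision cone (danger)."""
    dist = math.hypot(p_rel[0], p_rel[1])
    # Half-aperture of the cone; clamp the ratio in case of overlap.
    half_aperture = math.asin(min(1.0, combined_radius / dist))
    bearing = math.atan2(p_rel[1], p_rel[0])   # direction to the other robot
    heading = math.atan2(v_rel[1], v_rel[0])   # direction of v_rel
    # Wrapped angular offset between heading and bearing, in [0, pi].
    off = abs((heading - bearing + math.pi) % (2.0 * math.pi) - math.pi)
    return half_aperture - off   # positive when heading is inside the cone

danger = collision_cone_margin((4.0, 0.0), (1.0, 0.0), 0.5)  # heading straight at it
safe = collision_cone_margin((4.0, 0.0), (0.0, 1.0), 0.5)    # moving sideways
```

A planner in this style would sample candidate velocities and keep only those with a negative margin against every neighbour, which is the selection step CALU performs under bounded localisation uncertainty.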
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment in which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation, Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics, MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms, 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance, IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14904</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14904"/>
		<updated>2020-06-08T22:58:41Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all of them are required to reach the victim at the same time so that the victim can be treated with the complete set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Common Background&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
=== ROS ===&lt;br /&gt;
The Robot Operating System (ROS) is a popular platform for robot software. Despite its name, it is not only an operating system: it also provides development environments for building robot application programs [2]. ROS has one of the largest communities in the world, so developers and other talented people can easily contribute and share their findings and solutions worldwide, rather than keeping them within a single university or company [2]. ROS can even be adapted for industry to perform cooperative tasks with closed-source components. In addition, ROS provides good debugging tools such as rqt, a visualisation tool that helps reveal errors while the robot is running.&lt;br /&gt;
The master provides a name server for node connections and message communication. It must be running before any other program starts, since nodes cannot find each other and exchange messages without it [2].&lt;br /&gt;
The node is the smallest processing unit in ROS: one program designed for one functionality. Nodes register with the master and then exchange information over topics using the TCP/IP protocol [2].&lt;br /&gt;
The message represents the data communicated between nodes. It has a data type, such as integer, Boolean or floating point. A message travels in one direction only, from a publishing node to a subscribing node [2].&lt;br /&gt;
The topic names what a stream of messages is about, just like a topic in a conversation. Each topic must have at least one publisher and one subscriber: the publisher publishes messages on the topic, and the subscriber receives them [2].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
&lt;br /&gt;
=== &amp;#039;&amp;#039;&amp;#039;Cooperative Mapping&amp;#039;&amp;#039;&amp;#039; ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. Guided by the sensor data, these samples then converge to the most likely robot position. AMCL decreases the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; although easy to implement, it cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. The Velocity Obstacles method maintains the set of velocities that would lead to a collision. CALU chooses velocities outside this set to avoid obstacles and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment in which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation, Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics, MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms, 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance, IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14902</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14902"/>
		<updated>2020-06-08T22:48:13Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Common Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all of them are required to reach the victim at the same time so that the victim can be treated with the complete set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Common Background ===&lt;br /&gt;
&lt;br /&gt;
=== ROS ===&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
&lt;br /&gt;
=== Cooperative Mapping ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. Guided by the sensor data, these samples then converge to the most likely robot position. AMCL decreases the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; although easy to implement, it cannot guarantee robustness. For example, the robot team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used. CALU is a relatively new technique, based on Velocity Obstacles, that provides fully distributed communication among robots. The Velocity Obstacles method maintains the set of velocities that would lead to a collision. CALU chooses velocities outside this set to avoid obstacles and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present a project plan spanning two semesters. The first stage is to familiarise ourselves with ROS concepts and install a ROS working environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment in which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of this semester’s plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
In conclusion, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance. We have also completed the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to find the best fit for our project. All completed work has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming, ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment, Springer Nature Singapore Pte Ltd., p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation, Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics, MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms, 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance, IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2), Springer International Publishing AG.&lt;br /&gt;
&lt;br /&gt;
[9] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14901</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14901"/>
		<updated>2020-06-08T22:48:00Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Common Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all of them are required to reach the victim at the same time so that the victim can be treated with the complete set of equipment. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Common Background ===&lt;br /&gt;
=== ROS ===&lt;br /&gt;
&lt;br /&gt;
=== Turtlebot3 Burger ===&lt;br /&gt;
&lt;br /&gt;
=== Cooperative Mapping ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. These samples then converge to the most likely robot position based on the sensor data. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
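As an illustration of the sampling idea, a minimal Monte Carlo Localisation update step can be sketched as follows (a sketch only, not our implementation; the map model expected_reading and the noise constants are hypothetical placeholders):&lt;br /&gt;

```python
import math
import random

def mcl_step(particles, motion, sensor_reading, expected_reading, noise=0.5):
    """One Monte Carlo Localisation update: move, weight, resample.

    particles: list of (x, y, heading) pose hypotheses.
    motion: (dx, dy) odometry estimate since the last step.
    expected_reading(pose): range the sensor would report from that pose
        according to the map (hypothetical map model).
    sensor_reading: the range actually measured by the robot.
    """
    # 1. Motion update: shift every particle, adding odometry noise.
    moved = [(x + motion[0] + random.gauss(0, 0.05),
              y + motion[1] + random.gauss(0, 0.05),
              h) for (x, y, h) in particles]

    # 2. Measurement update: weight each particle by how well the map's
    #    predicted sensor value matches the real reading.
    weights = [math.exp(-((expected_reading(p) - sensor_reading) ** 2)
                        / (2 * noise ** 2)) for p in moved]

    # 3. Resample in proportion to the weights, so particles converge on
    #    the most likely pose. (AMCL additionally shrinks the particle
    #    count once the estimate has converged.)
    total = sum(weights)
    if total == 0:
        return moved  # all hypotheses equally bad; keep them unchanged
    return random.choices(moved, weights=weights, k=len(moved))
```

Each call moves the particles by the odometry estimate, weights them against the sensor reading, and resamples, so the particle cloud gradually concentrates around the true pose.&lt;br /&gt;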
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
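The global planning step can be illustrated with a toy A* search on a 4-connected occupancy grid, using the Manhattan distance as an admissible heuristic (an illustrative sketch, not the planner implementation used in the project):&lt;br /&gt;

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 free, 1 blocked).

    The Manhattan distance heuristic never overestimates the remaining
    cost on a 4-connected grid, so the first path that reaches the goal
    is optimal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    seen = {}                                   # best g found per node
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if g >= seen.get(node, float("inf")):
            continue  # already expanded via a cheaper route
        seen[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nr in range(rows) and nc in range(cols) and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no path exists
```

The heuristic steers the search towards the goal, which is why A* typically expands far fewer cells than Dijkstra’s algorithm on the same map.&lt;br /&gt;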
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader stopped functioning properly. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU), a relatively new technique based on Velocity Obstacles, is used; it provides fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision range to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
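The Velocity Obstacle test that CALU builds on can be illustrated with a simple cone check for two circular robots (a sketch under idealised constant-velocity assumptions; CALU additionally enlarges the radii by the bounded AMCL localisation error):&lt;br /&gt;

```python
import math

def in_velocity_obstacle(v, rel_pos, v_other, combined_radius):
    """Return True if velocity v = (vx, vy) leads to a collision with
    another robot, assuming both robots keep constant velocity.

    rel_pos: the other robot's position relative to ours.
    v_other: the other robot's velocity.
    combined_radius: sum of both robot radii.
    """
    # Work in the frame of the other robot: relative velocity.
    rvx, rvy = v[0] - v_other[0], v[1] - v_other[1]
    px, py = rel_pos
    dist = math.hypot(px, py)
    if combined_radius >= dist:
        return True  # the discs already overlap
    speed = math.hypot(rvx, rvy)
    if speed == 0:
        return False  # relative standstill, never closes the gap
    # The Velocity Obstacle is the cone of relative velocities aimed at
    # the other robot within the half-angle subtended by its enlarged disc.
    angle_to_other = math.atan2(py, px)
    angle_of_v = math.atan2(rvy, rvx)
    half_angle = math.asin(combined_radius / dist)
    diff = abs((angle_of_v - angle_to_other + math.pi) % (2 * math.pi)
               - math.pi)
    return half_angle >= diff
```

A velocity is unsafe when the relative velocity points into the cone subtended by the other robot; CALU selects velocities outside every such cone.&lt;br /&gt;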
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage, we also conduct a literature review to identify potential approaches for our project, and we design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination. Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester, we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14900</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14900"/>
		<updated>2020-06-08T22:47:37Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Common Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that all of the equipment needed to save the victim’s life arrives together. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Common Background ===&lt;br /&gt;
&amp;lt;small&amp;gt; ROS &amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt; Turtlebot3 Burger &amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cooperative Mapping ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. These samples then converge to the most likely robot position based on the sensor data. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader stopped functioning properly. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU), a relatively new technique based on Velocity Obstacles, is used; it provides fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision range to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage, we also conduct a literature review to identify potential approaches for our project, and we design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination. Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester, we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14899</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14899"/>
		<updated>2020-06-08T22:47:22Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that all of the equipment needed to save the victim’s life arrives together. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Common Background ===&lt;br /&gt;
&amp;lt;small&amp;gt;=== ROS ===&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt;=== Turtlebot3 Burger ===&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Cooperative Mapping ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. These samples then converge to the most likely robot position based on the sensor data. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader stopped functioning properly. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU), a relatively new technique based on Velocity Obstacles, is used; it provides fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision range to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage, we also conduct a literature review to identify potential approaches for our project, and we design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination. Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester, we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14895</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14895"/>
		<updated>2020-06-08T22:39:05Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Cooperative Mapping */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that all of the equipment needed to save the victim’s life arrives together. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, which allows each robot to navigate around obstacles within a dynamic environment to satisfy team goals. The second is the organisational level, where coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Cooperative Mapping ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position. These samples then converge to the most likely robot position based on the sensor data. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader stopped functioning properly. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU), a relatively new technique based on Velocity Obstacles, is used; it provides fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision. CALU chooses velocities outside this collision range to avoid obstacles, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise our proposed approach, we present our project plan over two semesters. The first stage is to familiarise ourselves with ROS concepts and to install a ROS working environment. During this stage, we also conduct a literature review to identify potential approaches for our project, and we design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination. Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the project plan for this semester. Next semester, we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are the completed prototype, with all project aims satisfied, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying the mapping performance. We have also finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently optimising the algorithm parameters to choose the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd., pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. Cambridge, Mass.: MIT Press, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4, pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Cham: Springer International Publishing AG.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14857</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14857"/>
		<updated>2020-06-08T15:51:10Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Topic 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for various complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. To save a victim’s life, a team of robots must transport medical equipment to the victim’s location. Each robot carries one type of medical equipment, and all robots are required to reach the victim at the same time so that all of the equipment needed to save the victim’s life arrives together. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on the given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because the robots have to carry out tasks under uncertainties caused by a changing environment, the project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles on the way to the destination. Humans navigate using a remarkably capable perception system; robots rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. Another challenge is selecting the information exchange mechanism within the team: how should the team react if a robot malfunctions or the network connection is lost? In such cases, an information technology is required that dynamically adapts the exchange of information to the operating conditions so that the team goal can still be satisfied. The final challenge is how to avoid other robots during a navigation task. In general, another robot cannot be treated as a simple moving obstacle, because in the worst case both robots try to avoid each other by moving to the same side, and then to the other side; in robotics this behaviour is known as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
The main robot we are using is the TurtleBot3 Burger, a small, inexpensive robot used in research, education and product prototyping. It is equipped with a LiDAR that supports obstacle avoidance at distances of up to 3.5 metres, and its on-board computer is a Raspberry Pi 3 Model B+. The TurtleBot3 Burger is a ROS-based robot. ROS, the Robot Operating System, is the main software platform used in this research: an open-source framework for robotics software development that supports many programming languages, such as C++, Python, Java and MATLAB, along with various tools and libraries for robotics development. To perform a navigation task in ROS, the Navigation stack is used. It mainly addresses two problems: global planning and motion planning. Global planning is the process of pre-planning a path from the starting point to the destination. Motion planning, which takes the output of global planning as input, is the process of choosing the optimal velocity to drive the robot safely to the destination.&lt;br /&gt;
&lt;br /&gt;
In our research, we use an ad hoc network as the wireless connection between robots. An ad hoc network allows nodes to join and communicate with each other without pre-existing infrastructure such as routers or access points.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples gradually converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
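The sample-and-converge idea behind Monte Carlo Localisation can be sketched as a toy particle filter. This is an illustrative, self-contained example, not the ROS amcl node: the 1-D corridor, the noise levels and the fixed sample count of 500 are all assumptions (AMCL would additionally adapt the sample count as the estimate converges).

```python
import math
import random

# Toy 1-D Monte Carlo Localisation: a robot moves along a 10 m corridor and
# takes noisy readings of its distance to the far wall. Particles (position
# hypotheses) are moved, weighted by the sensor model, and resampled, so the
# cloud converges on the most likely position.
CORRIDOR = 10.0
SENSOR_NOISE = 0.3
random.seed(0)

def sense(position):
    """Noisy distance-to-wall measurement taken from a true position."""
    return (CORRIDOR - position) + random.gauss(0.0, SENSOR_NOISE)

def weight(particle, measurement):
    """Likelihood of the measurement if the robot were at `particle`."""
    error = measurement - (CORRIDOR - particle)
    return math.exp(-(error * error) / (2 * SENSOR_NOISE ** 2))

def mcl_step(particles, move, measurement):
    # 1. Motion update: shift every particle, adding motion noise.
    particles = [p + move + random.gauss(0.0, 0.1) for p in particles]
    # 2. Measurement update: weight each particle by the sensor model.
    weights = [weight(p, measurement) for p in particles]
    # 3. Resample particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

true_pos = 2.0
particles = [random.uniform(0.0, CORRIDOR) for _ in range(500)]
for _ in range(10):
    true_pos += 0.5                      # the robot actually moves 0.5 m
    particles = mcl_step(particles, 0.5, sense(true_pos))

estimate = sum(particles) / len(particles)
print(estimate, true_pos)                # the estimate ends up near true_pos
```

After ten motion/measurement cycles the initially uniform particle cloud has collapsed around the true position; AMCL exploits exactly this collapse to shrink the particle set.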
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that takes the global path as input and searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles the robot could collide with.&lt;br /&gt;
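The global-planning half of this pair can be sketched with a minimal A* over a 4-connected occupancy grid. This is a generic illustration, not the ROS global planner: the example grid, the unit step cost and the Manhattan-distance heuristic are assumptions (the heuristic is admissible here, so the returned path is still a shortest one).

```python
import heapq

# Minimal A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).
# Cells are (row, col) tuples; every move costs 1, and the Manhattan
# distance to the goal is used as the heuristic h.
def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]              # priority queue of (f = g + h, cell)
    g = {start: 0}                       # best known cost-so-far per cell
    came_from = {}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                  # reconstruct path by walking back
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                tentative = g[cur] + 1
                if tentative < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = tentative
                    came_from[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(open_set, (tentative + h, (nr, nc)))
    return None                          # open set exhausted: no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
path = astar(grid, (0, 0), (3, 3))
print(path)
```

The heuristic only reorders the search: nodes that look closer to the goal are expanded first, which is why A* typically visits far fewer cells than Dijkstra on the same grid.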
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all operations through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively recent technique, based on Velocity Obstacles, that operates with fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set and uses the bounded uncertainty from AMCL to provide each robot with localisation information about the others.&lt;br /&gt;
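The velocity-obstacle idea can be illustrated with a toy sampled check. This is not the CALU implementation: the disc-collision test over a finite horizon, the sampled candidate grid around the preferred velocity, and all the numbers are illustrative assumptions (CALU would also inflate the combined radius by the bounded localisation uncertainty from AMCL, which here is folded into a plain parameter).

```python
import math

# Toy velocity-obstacle check: a relative velocity is "unsafe" if, within a
# time horizon, the two robots (modelled as discs) would come closer than
# their combined radius.
def leads_to_collision(rel_pos, rel_vel, combined_radius, horizon=5.0, dt=0.1):
    """rel_pos/rel_vel: position and velocity of the other robot relative to us."""
    for k in range(int(horizon / dt) + 1):
        t = k * dt
        dx = rel_pos[0] + rel_vel[0] * t
        dy = rel_pos[1] + rel_vel[1] * t
        if math.hypot(dx, dy) < combined_radius:
            return True
    return False

def choose_velocity(preferred, rel_pos, other_vel, combined_radius):
    """Sample candidate velocities around the preferred one and return the
    candidate closest to it that stays outside the velocity-obstacle set."""
    candidates = [(preferred[0] + i * 0.1, preferred[1] + j * 0.1)
                  for i in range(-5, 6) for j in range(-5, 6)]
    candidates.sort(key=lambda v: math.hypot(v[0] - preferred[0],
                                             v[1] - preferred[1]))
    for v in candidates:
        rel_vel = (other_vel[0] - v[0], other_vel[1] - v[1])
        if not leads_to_collision(rel_pos, rel_vel, combined_radius):
            return v
    return (0.0, 0.0)                    # no safe sample found: stop

# Another robot is 2 m ahead and driving straight at us; our preferred
# velocity is straight ahead, so the chosen velocity must deviate from it.
chosen = choose_velocity(preferred=(0.5, 0.0), rel_pos=(2.0, 0.0),
                         other_vel=(-0.5, 0.0), combined_radius=0.4)
print(chosen)
```

Because both robots run the same rule symmetrically, each sidesteps the head-on course instead of repeating the reciprocal dance described above.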
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to become familiar with ROS concepts and set up a ROS working environment; during this stage we also conduct a literature review to identify potential approaches and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype that satisfies all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. MIT Press, Cambridge, MA, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Springer International Publishing AG, Cham.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14856</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=14856"/>
		<updated>2020-06-08T15:49:58Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a variety of complex tasks. For example, consider a case in which robots explore a nuclear power plant to carry out a rescue mission. To save the victim’s life, a team of robots is tasked with transporting medical equipment to the victim’s location. Each robot transports one type of medical equipment, and all robots are required to reach the victim’s location at the same time so that enough equipment is available to save the victim’s life. To complete the mission, the team must navigate to the victim, avoid any obstacles along the way, and arrive at the victim’s location simultaneously. Such a task is called simultaneous arrival.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on the given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because the robots have to carry out tasks under uncertainties caused by a changing environment, the project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles on the way to the destination. Humans navigate using a remarkably capable perception system; robots rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. Another challenge is selecting the information exchange mechanism within the team: how should the team react if a robot malfunctions or the network connection is lost? In such cases, an information technology is required that dynamically adapts the exchange of information to the operating conditions so that the team goal can still be satisfied. The final challenge is how to avoid other robots during a navigation task. In general, another robot cannot be treated as a simple moving obstacle, because in the worst case both robots try to avoid each other by moving to the same side, and then to the other side; in robotics this behaviour is known as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples gradually converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that takes the global path as input and searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles the robot could collide with.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all operations through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively recent technique, based on Velocity Obstacles, that operates with fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set and uses the bounded uncertainty from AMCL to provide each robot with localisation information about the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to become familiar with ROS concepts and set up a ROS working environment; during this stage we also conduct a literature review to identify potential approaches and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype that satisfies all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. MIT Press, Cambridge, MA, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Springer International Publishing AG, Cham.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12931</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12931"/>
		<updated>2019-09-23T02:29:09Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a variety of complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which a task is carried out more efficiently by multiple robots than by a single robot.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on the given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because the robots have to carry out tasks under uncertainties caused by a changing environment, the project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles on the way to the destination. Humans navigate using a remarkably capable perception system; robots rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. Another challenge is selecting the information exchange mechanism within the team: how should the team react if a robot malfunctions or the network connection is lost? In such cases, an information technology is required that dynamically adapts the exchange of information to the operating conditions so that the team goal can still be satisfied. The final challenge is how to avoid other robots during a navigation task. In general, another robot cannot be treated as a simple moving obstacle, because in the worst case both robots try to avoid each other by moving to the same side, and then to the other side; in robotics this behaviour is known as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot’s position; based on the sensor data, those samples gradually converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, which saves considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra’s algorithm. The Dynamic Window Approach is a local path planning technique that takes the global path as input and searches the velocity space for the optimal velocity to follow the global path towards the destination while avoiding obstacles the robot could collide with.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all operations through a single leader; it is easy to implement but cannot guarantee robustness, since the team could fail to execute its tasks if the leader malfunctions. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively recent technique, based on Velocity Obstacles, that operates with fully distributed communication among robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set and uses the bounded uncertainty from AMCL to provide each robot with localisation information about the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
Our project plan spans two semesters. The first stage is to become familiar with ROS concepts and set up a ROS working environment; during this stage we also conduct a literature review to identify potential approaches and design the base environment on which the whole project will be tested. The second stage is about navigating a single robot to a destination: Zeping will work on building maps of the environment, and I will be responsible for localisation and navigation. After each unit is completed, we will perform system integration and conduct testing in the physical environment, which marks the end of the plan for this semester. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I will work on coordinated navigation. The deliverables for semester 2 are a completed prototype that satisfies all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report on the progress of our project so far. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function of a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both the physical and the simulation environment.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] Pyo, Y., Cho, H., Jung, R. &amp;amp; Lim, T. 2017. ROS Robot Programming. ROBOTIS Co., Ltd, pp. 10-314.&lt;br /&gt;
&lt;br /&gt;
[2] Wang, H., Huang, M. &amp;amp; Wu, D. 2019. A Quantitative Analysis on Gmapping Algorithm Parameters Based on Lidar in Small Area Environment. Springer Nature Singapore Pte Ltd, p. 482.&lt;br /&gt;
&lt;br /&gt;
[3] Koubaa, A. 2019. Robot Path Planning and Cooperation. Springer, p. 4.&lt;br /&gt;
&lt;br /&gt;
[4] Claes, D., Hennes, D., Tuyls, K. &amp;amp; Meeussen, W. 2012. Collision Avoidance under Bounded Localization Uncertainty. 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1192-1198.&lt;br /&gt;
&lt;br /&gt;
[5] Thrun, S., Burgard, W. &amp;amp; Fox, D. 2010. Probabilistic Robotics. MIT Press, Cambridge, MA, pp. 263-265.&lt;br /&gt;
&lt;br /&gt;
[6] Sariff, N. &amp;amp; Buniyamin, N. 2006. An Overview of Autonomous Mobile Robot Path Planning Algorithms. 2006 4th Student Conference on Research and Development.&lt;br /&gt;
&lt;br /&gt;
[7] Fox, D., Burgard, W. &amp;amp; Thrun, S. 1997. The dynamic window approach to collision avoidance. IEEE Robotics &amp;amp; Automation Magazine, 4(1), pp. 23-33.&lt;br /&gt;
&lt;br /&gt;
[8] Koubaa, A. 2017. Robot Operating System (ROS): The Complete Reference (Volume 2). Springer International Publishing AG, Cham.&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12930</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12930"/>
		<updated>2019-09-23T02:27:41Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots today are responsible for a variety of complex tasks. For example, consider a case in which robots are tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which a task is carried out more efficiently by multiple robots than by a single robot.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination can be demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information such as cognition, cooperative behaviours and trust. In a rapidly changing environment, a team of robots must navigate to a destination based on the given a priori knowledge, for example a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because the robots have to carry out tasks under uncertainties caused by a changing environment, the project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles on the way to the destination. Humans navigate using a remarkably capable perception system; robots rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. Another challenge is selecting the information exchange mechanism within the team: how should the team react if a robot malfunctions or the network connection is lost? In such cases, an information technology is required that dynamically adapts the exchange of information to the operating conditions so that the team goal can still be satisfied. The final challenge is how to avoid other robots during a navigation task. In general, another robot cannot be treated as a simple moving obstacle, because in the worst case both robots try to avoid each other by moving to the same side, and then to the other side; in robotics this behaviour is known as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is in a map, Adaptive Monte Carlo Localisation also known as AMCL is an improved version of Monte Carlo Localisation is used. Monte Carlo Localisation generates a large number of samples to represent the robot position likelihood. Those samples later will converge to the most likely robot position  based on the sensor data. AMCL can decrease the number of samples needed once the robot is localised, which saves a lot of computational resources.&lt;br /&gt;
To navigate to a destination, the combination of A* and Dynamic Windows Approach is used. A* is a global path planning algorithm to find the shortest path to a destination using heuristic, which allows a generally faster execution time compared with the exact path planning technique such as Dijkstra algorithm. Dynamic Window Approach is a local path planning technique for calculating the optimal velocity reach the destination follow the global path while avoiding obstacles that can possibly collide with the robot in the velocity search space.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination allows all robots to operate through a single leader, which can not guarantee the robustness even though it is easy to implement. For example, the robot team could fail to execute tasks if the leader was not functioned properly. The distributed coordination provides a more flexible approach to multi-robot communication, which allows each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively new technique, based on Velocity Obstacles, that operates over fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set to avoid collisions, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
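The Velocity Obstacles idea can be sketched for a single pair of robots as follows. This is a simplified illustration under assumed parameters (a shared robot radius and a sampled time horizon), not the CALU implementation, which additionally bounds each robot's localisation uncertainty.&lt;br /&gt;

```python
import math

def choose_velocity(candidates, preferred, own_pos, other_pos, other_vel,
                    radius=0.3, horizon=3.0, dt=0.1):
    # Velocity-Obstacles-style selection: keep only the candidate velocities
    # whose relative motion stays clear of the other robot over the horizon,
    # then pick the admissible one closest to the preferred velocity.
    safe = []
    for vx, vy in candidates:
        rvx = vx - other_vel[0]   # velocity relative to the other robot
        rvy = vy - other_vel[1]
        # Minimum separation over the horizon, sampled at dt intervals
        steps = int(horizon / dt)
        closest = min(
            math.hypot(own_pos[0] - other_pos[0] + rvx * t * dt,
                       own_pos[1] - other_pos[1] + rvy * t * dt)
            for t in range(steps + 1)
        )
        # Admissible iff the closest separation is at least the sum of radii
        # (max(a, b) == a holds exactly when a is not smaller than b)
        if max(closest, 2 * radius) == closest:
            safe.append((vx, vy))
    if not safe:
        return (0.0, 0.0)  # no safe velocity found: stop
    return min(safe, key=lambda v: math.hypot(v[0] - preferred[0],
                                              v[1] - preferred[1]))
```

Given a stationary robot directly ahead, the straight-ahead candidate falls inside the collision set and is rejected, and the closest safe velocity to the preferred one is returned instead.&lt;br /&gt;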
&lt;br /&gt;
== Results ==&lt;br /&gt;
To realise the proposed approach, our project plan spans two semesters. The first stage is to become familiar with ROS concepts and to install a working ROS environment. During this stage we also conduct a literature review to identify potential approaches, and design the base environment on which the whole project will be tested. The second stage concerns navigating a single robot to a destination: Zeping will work on building maps of the environment, while I will be responsible for localisation and navigation. Once each unit is completed, we will integrate the system and test it in the physical environment, which marks the end of this semester's plan. Next semester we will work on robot coordination: Zeping will be in charge of establishing a distributed network among the robots, while I work on coordinated navigation. The deliverables for semester 2 are a completed prototype satisfying all project aims, and a thesis.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report the progress of the project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ...&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12929</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12929"/>
		<updated>2019-09-23T02:27:18Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for a variety of complex tasks; consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which multiple robots carry out a task faster and more efficiently than one robot could.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, such as a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because a robot has to carry out its task under the uncertainties of a changing environment, these project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles along the way to its destination. Humans navigate using a highly capable perception system; robots must instead rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. The second challenge is selecting the information exchange mechanism within the team. How should the team react if a robot malfunctions or the network connection is lost? Such cases call for an information technology that dynamically adapts the exchange of information to the operating conditions so that the team goal is still satisfied. The final challenge is avoiding other robots during navigation. In general, another robot cannot simply be treated as a moving obstacle: in the worst case, both robots try to dodge by moving to the same side, then to the other side, and so on, a behaviour known in robotics as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot's position; based on the sensor data, these samples then converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, saving considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra's algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity with which to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively new technique, based on Velocity Obstacles, that operates over fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set to avoid collisions, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report the progress of the project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ...&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12928</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12928"/>
		<updated>2019-09-23T02:27:09Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for a variety of complex tasks; consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which multiple robots carry out a task faster and more efficiently than one robot could.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, such as a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because a robot has to carry out its task under the uncertainties of a changing environment, these project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles along the way to its destination. Humans navigate using a highly capable perception system; robots must instead rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. The second challenge is selecting the information exchange mechanism within the team. How should the team react if a robot malfunctions or the network connection is lost? Such cases call for an information technology that dynamically adapts the exchange of information to the operating conditions so that the team goal is still satisfied. The final challenge is avoiding other robots during navigation. In general, another robot cannot simply be treated as a moving obstacle: in the worst case, both robots try to dodge by moving to the same side, then to the other side, and so on, a behaviour known in robotics as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot's position; based on the sensor data, these samples then converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, saving considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra's algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity with which to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively new technique, based on Velocity Obstacles, that operates over fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set to avoid collisions, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
To conclude, we report the progress of the project to date. We have built several environment maps with different parameters to provide a basis for quantifying mapping performance, and we have finished developing the navigation function for a single robot: a robot can now localise itself and move to a specified location within the map. We are currently tuning the algorithm parameters to find the best fit for our project. Everything completed so far has been tested in both physical and simulated environments.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ...&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12927</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12927"/>
		<updated>2019-09-23T02:25:47Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for a variety of complex tasks; consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which multiple robots carry out a task faster and more efficiently than one robot could.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, such as a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because a robot has to carry out its task under the uncertainties of a changing environment, these project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles along the way to its destination. Humans navigate using a highly capable perception system; robots must instead rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. The second challenge is selecting the information exchange mechanism within the team. How should the team react if a robot malfunctions or the network connection is lost? Such cases call for an information technology that dynamically adapts the exchange of information to the operating conditions so that the team goal is still satisfied. The final challenge is avoiding other robots during navigation. In general, another robot cannot simply be treated as a moving obstacle: in the worst case, both robots try to dodge by moving to the same side, then to the other side, and so on, a behaviour known in robotics as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
For a robot to know where it is on a map, Adaptive Monte Carlo Localisation (AMCL), an improved version of Monte Carlo Localisation, is used. Monte Carlo Localisation generates a large number of samples to represent the likelihood of the robot's position; based on the sensor data, these samples then converge on the most likely position. AMCL can decrease the number of samples needed once the robot is localised, saving considerable computational resources.&lt;br /&gt;
To navigate to a destination, a combination of A* and the Dynamic Window Approach is used. A* is a global path planning algorithm that finds the shortest path to a destination using a heuristic, which generally gives a faster execution time than exact path planning techniques such as Dijkstra's algorithm. The Dynamic Window Approach is a local path planning technique that searches the velocity space for the optimal velocity with which to follow the global path towards the destination while avoiding obstacles that could collide with the robot.&lt;br /&gt;
The information exchange mechanism within a team of robots can be divided into centralised and distributed coordination. Centralised coordination routes all robots through a single leader; it is easy to implement but cannot guarantee robustness. For example, the team could fail to execute its tasks if the leader malfunctioned. Distributed coordination provides a more flexible approach to multi-robot communication, allowing each robot to operate on its own without a leader.&lt;br /&gt;
To avoid other robots while moving, Collision Avoidance with Localisation Uncertainty (CALU) is used: a relatively new technique, based on Velocity Obstacles, that operates over fully distributed communication among the robots. Velocity Obstacles maintains the set of velocities that would eventually lead to a collision; CALU chooses velocities outside this set to avoid collisions, and uses bounded AMCL to provide each robot with the localisation information of the others.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ...&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12926</id>
		<title>Projects:2019s2-23101 AI Enabled Information Technologies for Multi-Robot Coordination</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23101_AI_Enabled_Information_Technologies_for_Multi-Robot_Coordination&amp;diff=12926"/>
		<updated>2019-09-23T02:24:11Z</updated>

		<summary type="html">&lt;p&gt;A1693573: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2018s1|106]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robots nowadays are responsible for a variety of complex tasks; consider, for example, robots tasked with exploring a nuclear power plant to carry out a rescue mission. Our project is motivated by the need for coordinated operation, in which multiple robots carry out a task faster and more efficiently than one robot could.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Nha Nam Nguyen Nguyen&lt;br /&gt;
* Zeping Zhao&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
* DST Group&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
In the context of our project, coordination is demonstrated at two levels. The first is the physical level, at which each robot navigates around obstacles within a dynamic environment to satisfy the team goals. The second is the organisational level, at which coordination distributes information including cognition, cooperative behaviours and trust. Consider a rapidly changing environment: a team of robots must navigate to a destination based on given a priori knowledge, such as a map. Each robot needs to collect and process new information about its surroundings, coordinate that information within the team, and determine the optimal approach to the team goal.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
Because a robot has to carry out its task under the uncertainties of a changing environment, these project aims lead to several challenges. The first is the navigation problem: a robot must know where it is on the map, plan an optimal path, and avoid any obstacles along the way to its destination. Humans navigate using a highly capable perception system; robots must instead rely on sensors, and no sensor is perfect. Noise in the sensor system adds uncertainty to the task being carried out. The second challenge is selecting the information exchange mechanism within the team. How should the team react if a robot malfunctions or the network connection is lost? Such cases call for an information technology that dynamically adapts the exchange of information to the operating conditions so that the team goal is still satisfied. The final challenge is avoiding other robots during navigation. In general, another robot cannot simply be treated as a moving obstacle: in the worst case, both robots try to dodge by moving to the same side, then to the other side, and so on, a behaviour known in robotics as the reciprocal dance.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ...&lt;/div&gt;</summary>
		<author><name>A1693573</name></author>
		
	</entry>
</feed>