<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1789638</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1789638"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1789638"/>
	<updated>2026-04-22T03:33:04Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17873</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17873"/>
		<updated>2022-06-08T03:30:55Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, military, automotive and many other industries. In recent years a great deal of research has gone into cooperative multi-agent robotic systems, with the main purpose of finding a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system using a Simulink-with-Arduino based approach.&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection through the CoppeliaSim (V-REP) and MATLAB interface. All agents in the system should be able to communicate with each other while performing the other sub-functions, and the robots should reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section gives background for the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for such a system to be self-contained: without any input from a control centre, it must be able to gather information about its environment and surroundings, function without human intervention, and be capable of self-maintenance [1].&lt;br /&gt;
[[File:Autonomous system applications.png|center|handle|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a multi-agent system is a group of agents working together in the same situation. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Compared with a single agent, a multi-agent system also enhances overall system performance, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
&lt;br /&gt;
3. Centralised, Decentralised and Hybrid Control Structures&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures rely on one individual or input to make decisions and provide direction for the system. Existing studies have already shown this approach works: a multi-agent team can perform tasks cooperatively with decision making handled by a central computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures do not rely on any single individual or input to make decisions or provide direction for the system. For the system to work independently, it needs the ability to make decisions on its own according to the current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures implement both centralised and decentralised control in one system, so the system is able to work with or without external input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was thresholded and converted to a binary image that MATLAB can read. The binary map was then inflated for use during path planning, so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
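The thresholding and inflation steps described above can be sketched as follows. This is only an illustration: the project used MATLAB image processing, and the threshold value and inflation radius here are assumptions, not taken from the wiki.

```python
# Sketch of the obstacle-map pipeline: threshold a grayscale top-down
# image into a binary occupancy grid, then inflate obstacles so planned
# paths keep a safety margin. The cutoff (128) and radius (1 cell) are
# illustrative assumptions.

def threshold_image(gray, cutoff=128):
    """Pixels darker than the cutoff are treated as obstacles (1)."""
    return [[1 if cutoff > px else 0 for px in row] for row in gray]

def inflate(grid, radius=1):
    """Mark every cell within radius (Chebyshev distance) of an obstacle."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        if (r + dr) in range(rows) and (c + dc) in range(cols):
                            out[r + dr][c + dc] = 1
    return out

# A 4x4 grayscale image with one dark (obstacle) pixel near the middle.
gray = [[200, 200, 200, 200],
        [200,  50, 200, 200],
        [200, 200, 200, 200],
        [200, 200, 200, 200]]
occ = threshold_image(gray)
inflated = inflate(occ, radius=1)
```

Planning against the inflated map means a path that merely touches the inflated region is rejected, which keeps the real robot a safe distance from the true obstacle boundary.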
For path planning of the robots we used a Probabilistic Roadmap, which applies a graph search algorithm and then chooses the shortest path that avoids obstacles. For communication we stored the data in a shared module from which all the robots extract the information.&lt;br /&gt;
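The project's planner is a Probabilistic Roadmap; as a deterministic stand-in, this sketch shows only the graph-search step, running breadth-first search over a small occupancy grid (0 = free, 1 = obstacle) to return a shortest obstacle-free path.

```python
# Breadth-first search for a shortest path on a 4-connected occupancy
# grid. This stands in for the graph-search stage of a PRM; the actual
# roadmap sampling used by the project is omitted.
from collections import deque

def shortest_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # also serves as the visited set
    q = deque([start])
    while q:
        cur = q.popleft()
        if cur == goal:
            # Walk the predecessor chain back to the start.
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if rr in range(rows) and cc in range(cols) \
                    and grid[rr][cc] == 0 and (rr, cc) not in prev:
                prev[(rr, cc)] = cur
                q.append((rr, cc))
    return None  # no obstacle-free path exists

# Wall across the middle row forces a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(grid, (0, 0), (2, 0))
```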
&lt;br /&gt;
2. MATLAB and robot Simulation:&lt;br /&gt;
The testing proceeded in stages. First, the controller was built in MATLAB and its output was checked against the expected output; this was performed through the MATLAB and robot interface. Wi-Fi provides the main communication link, connecting MATLAB and the robot interface using the TCP/IP protocol. Of the two protocols available, TCP/IP was chosen because it offers better control and a lower risk of failure during operation. In addition, an IMU and wheel encoders on each robot were used to determine its location and check it for errors. Data sharing between MATLAB and the Arduino on each robot is over TCP.&lt;br /&gt;
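The coordinate exchange over TCP/IP might look like the following sketch (the real system links MATLAB and an Arduino; the port choice, JSON message format and single-shot exchange here are illustrative assumptions, not the project's actual protocol):

```python
# One robot (client) reports its coordinate to a listener (server)
# over a localhost TCP connection. Message format is an assumption.
import json
import socket
import threading

def serve_once(sock, store):
    """Accept one connection and store the coordinate it sends."""
    conn, _ = sock.accept()
    with conn:
        data = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break
            data += chunk
        store.append(json.loads(data.decode()))

# Listener side: bind to an OS-assigned localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
store = []
t = threading.Thread(target=serve_once, args=(server, store))
t.start()

# Robot side: connect and send its current coordinate, then close.
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(json.dumps({"id": 1, "x": 0.5, "y": 1.2}).encode())
t.join()
server.close()
```

TCP's connection-oriented delivery is what the wiki means by "better control and lower risk of failure": unlike UDP, lost or reordered packets are handled by the protocol itself.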
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
In the V-REP simulation, the team was able to create multiple rovers that work together in a V-REP scene.&lt;br /&gt;
The obstacle-avoidance function was achieved, and the path planning selects the shortest path, giving the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works as intended and produces the desired results: the robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and their final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion they share their current coordinates with one another. The results were displayed in the command window so that the user can check them; the snippet shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
In the real-hardware test, three rovers were able to work together to reach target coordinates using four functions:&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
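The four functions listed above can be sketched as a simple command dispatcher (assumed semantics: askID returns the rover id, while getCoordinate and setCoordinate read and write its position; in the real project these run over the TCP/IP link):

```python
# Hypothetical dispatcher mirroring the wiki's four rover functions.
# The names come from the wiki; the argument and return conventions
# are assumptions for illustration.

class Rover:
    def __init__(self, rover_id, x=0.0, y=0.0):
        self.rover_id = rover_id
        self.pos = (x, y)

    def handle(self, command, *args):
        """Dispatch one named command to the matching behaviour."""
        if command == "askID":
            return self.rover_id
        if command == "getCoordinate":
            return self.pos
        if command == "setCoordinate":
            self.pos = (args[0], args[1])
            return "ok"
        raise ValueError("unknown command: " + command)

rover = Rover(rover_id=2)
rover.handle("setCoordinate", 1.0, 3.5)
```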
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory resulting from the movement of the robot. The figure below shows that when only the encoder is used, there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]]&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]]&lt;br /&gt;
The figure above shows very little deviation from the desired trajectory during robot localisation; hence, using sensor fusion gave much better results.&lt;br /&gt;
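One lightweight way to fuse drifting encoder odometry with IMU readings is a complementary filter on heading. The wiki does not name the fusion algorithm actually used, so this sketch is only an assumed illustration of why fusion outperforms encoder-only localisation:

```python
# Complementary-filter sketch (assumed technique, not confirmed by the
# wiki): blend an encoder-derived heading, which accumulates drift,
# with an IMU heading, which is noisier but drift-free.

def fuse_heading(encoder_heading, imu_heading, alpha=0.9):
    """Weighted blend in degrees: weight alpha goes to the IMU."""
    return alpha * imu_heading + (1 - alpha) * encoder_heading

truth = 90.0     # actual heading, degrees
encoder = 97.0   # encoder estimate after accumulated drift
imu = 90.5       # IMU reading, close to truth
fused = fuse_heading(encoder, imu)
```

The fused estimate sits close to the IMU reading while retaining some encoder information, matching the wiki's observation that fusion deviates far less from the set trajectory than encoders alone.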
&lt;br /&gt;
For the hardware test, communication between the rovers is demonstrated in the GIF below. Both scenes simulate the multi-rover system working together to perform the task: one rover starts to move when it reaches and comes near another rover, simulating the coordinate exchange of the real-hardware setup.&lt;br /&gt;
Need to attach the video and picture&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved: all three robots could be seen in motion at the same time while sharing coordinates among themselves. In the physical robot testing, robot localisation was achieved.&lt;br /&gt;
&lt;br /&gt;
=== Future Work ===&lt;br /&gt;
Improvements can be made so that the performance-testing simulations run more smoothly. For the physical system, the communication module and obstacle avoidance still have to be developed further to improve performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17871</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17871"/>
		<updated>2022-06-07T16:20:36Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks.Robots are used in agricultural, food technology, manufacturing, military, automobile and many more industries.During the recent years lot of research has been put into the area of cooperative multi-agent robotic system.The main purpose is to find to a feasible solution for the optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system  where it proves multi-robot system is working together while performing obstacles avoidance/path-planning/communication/object detection using interface of coppeliasim(V-Rep) and MATLAB. All the agents in the system should be able to communicate with each other along-with performing the other sub functions . The robots should be able to reach to the target location with all of them working simuntaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section explains details on the background of the project which includes hardware and software tools used for the projects.&amp;lt;br&amp;gt;&lt;br /&gt;
1. Autonomous System&amp;lt;br&amp;gt;&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about the environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|handle|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&amp;lt;br&amp;gt;&lt;br /&gt;
In general, “agent” is anything that can be viewed as perceiving its environment using sensors and acting upon that environment through effectors [2], whereas a “multi-agent” is a group of agents that work during that situation. In an open and distributed environment, agents also are sophisticated computer programs that act autonomously on behalf of their users. Besides, the usage of multi-agent system enhances overall system performance, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility compared to single agent system.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Centralised, Decentralised and Hybrid control structure.&amp;lt;br&amp;gt;&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures mean relying on one individual or input to make decisions and provide direction for the system. Current achievement and studies are already proven the works are able to be done. The achievement is relating to multi-agent where it can perform tasks cooperatively with the help of decision making by the computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures mean not replying on any individual or input to make decision or provide direction the system. In order for the system to works independently, the system needs to have the ability to make decision on its own correspond with current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures mean that the system implementing both centralised and decentralised in one system. Hence, the system able to work with or without any input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested in two different methods. They are mentioned as below :&lt;br /&gt;
&lt;br /&gt;
1.VREP and Matlab Simulation:&lt;br /&gt;
In this VREP has been used as the simulation environment and remote api connection with matlab has been used for giving the input. Path planning ,Robot localisation and obstacle avoidance were performed for a multi robot system.The technique of image processing has been used for doing obstacle avoidance and path planning.For image processing a sky view image has been taken for the environment and then a threshold image is created converted to binary to be read by Matlab. Further the image was inflated in order to use during path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning of the robots we have used Probabilistic Road Mapping, which follows a graph search algorithm and then chooses the shortest path to travel avoiding obstacles. For the communication we have used the technique of storing the data into a module then all the robots extract the information from That module.  &lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot Simulation:&lt;br /&gt;
The system was tested in three different methods. First the controller was made in matlab and its was tested that the output given is the same as what is expected . This was performed using Simulink and matlab interface . In addition to that IMU and encoder were used on the robots which were used to determine the location and check it for errors. The data sharing between matlab and Arduino that was on robot is through TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For V-REP simulation, the results is that the project&amp;#039;s team members were able to create a multi-rovers that are working together in V-Rep scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The functions of obstacle have been achieved in the project . The path planning works choosing the shortest path making the system good in performance .&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above the path planning is working exactly the way we want it to and we get the desired results. The robots are given the target coordinates and they reach their target location.The figure below shows the robots at their initial position on the left and final position on the right.The robots complete this task completely autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion they share their current coordinate among the robots.The results were displayed in the command window in order to check for the user . The snippet shows the coordinate data used for data verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For Real simulation, the results is that three rovers are able to work together in completing targeted coordinates which include 4 functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below Red dashed line shows the red dashed line is the set motion trajectory and the blue solid line is the actual motion trajectory resulted from the movement of the robot.It can be seen in the figure below that when only encoder is used then there is a large deviation from the desired results.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows much less deviation from the desired trajectory during robot localisation. Hence, sensor fusion gave much better results.&lt;br /&gt;
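The benefit of fusing the IMU with the wheel encoders can be illustrated with a minimal complementary-filter sketch. This is not the team's actual filter: the blend weight, the error values, and the one-dimensional heading model are assumptions made purely to show why the fused estimate drifts less than the encoder-only estimate.

```python
# Illustrative complementary filter: blend the encoder heading (which
# accumulates drift, e.g. from wheel slip) with the IMU heading.
# alpha and the example numbers are made up for this sketch.

def fuse_heading(encoder_heading, imu_heading, alpha=0.9):
    """Weighted blend of two heading estimates (radians)."""
    return alpha * imu_heading + (1.0 - alpha) * encoder_heading

true_heading = 1.00
encoder_heading = 1.25   # encoder-only estimate: large drift from slip
imu_heading = 1.02       # IMU estimate: small error

fused = fuse_heading(encoder_heading, imu_heading)
print(abs(fused - true_heading), abs(encoder_heading - true_heading))
```

With these example numbers the fused error is far smaller than the encoder-only error, mirroring the difference between the two trajectory figures.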
&lt;br /&gt;
Regarding the hardware testing, the communication between the rovers is demonstrated in the GIF below. Both scenes show the multi-rover system working together to perform a task: one rover starts to move when it comes near another rover, which demonstrates the coordinate exchange in the real-hardware setup.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen moving at the same time and sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations could be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still need to be developed to improve system performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17855</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17855"/>
		<updated>2022-06-07T14:17:17Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used today across many industries to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&amp;lt;br&amp;gt;&lt;br /&gt;
1. Autonomous System&amp;lt;br&amp;gt;&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|handle|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&amp;lt;br&amp;gt;&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of agents that work together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, compared to a single-agent system, a multi-agent system enhances overall performance, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Centralised, Decentralised and Hybrid control structure.&amp;lt;br&amp;gt;&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures rely on one individual or input to make decisions and provide direction for the system. Existing studies have already proven that this approach works: a multi-agent system can perform tasks cooperatively with the help of decision making by a central computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures do not rely on any single individual or input to make decisions or provide direction for the system. For the system to work independently, it needs the ability to make decisions on its own according to the current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures implement both centralised and decentralised control in one system. Hence, the system is able to work with or without external input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was taken, thresholded and converted to binary so that it could be read by MATLAB. The image was then inflated for use during path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning we used probabilistic roadmaps (PRM), which build a graph, run a graph search and choose the shortest path that avoids obstacles. For communication, the data are stored in a shared module from which all the robots extract the information.&lt;br /&gt;
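The PRM idea above can be sketched as follows. The project used MATLAB's planner; this is a simplified Python illustration, and the box obstacle, sample count, connection radius and helper names are all made up for the example: sample random free points, connect nearby collision-free pairs into a graph, then run Dijkstra for the shortest path.

```python
import heapq
import math
import random

# Illustrative probabilistic-roadmap (PRM) sketch. Workspace is the unit
# square; the obstacle is an axis-aligned box (xmin, ymin, xmax, ymax).

def collision_free(p, q, obstacle, steps=20):
    """Check sampled points along segment p-q against the box obstacle."""
    (xmin, ymin, xmax, ymax) = obstacle
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if (x >= xmin) and (xmax >= x) and (y >= ymin) and (ymax >= y):
            return False
    return True

def prm_shortest_path(start, goal, obstacle, n_samples=200, radius=0.35, seed=1):
    """Return the length of the shortest roadmap path from start to goal."""
    random.seed(seed)
    nodes = [start, goal] + [(random.random(), random.random())
                             for _ in range(n_samples)]
    nodes = [p for p in nodes if collision_free(p, p, obstacle)]  # drop samples inside the box
    # Connect every pair of nearby, mutually visible nodes.
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if (radius >= d) and collision_free(nodes[i], nodes[j], obstacle):
                edges[i].append((j, d))
                edges[j].append((i, d))
    # Dijkstra from node 0 (start) to node 1 (goal).
    dist = {0: 0.0}
    heap = [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == 1:
            return d
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            nd = d + w
            if dist.get(v, math.inf) > nd:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

obstacle = (0.4, 0.4, 0.6, 0.6)   # box blocking the straight-line route
length = prm_shortest_path((0.1, 0.5), (0.9, 0.5), obstacle)
print(round(length, 2))
```

The returned path length is necessarily at least the straight-line distance (0.8 here) and finite whenever the sampled roadmap connects start to goal, which is what makes the planner choose a short detour around the obstacle.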
&lt;br /&gt;
2. MATLAB and robot testing:&lt;br /&gt;
This testing had three parts. First, the controller was built in MATLAB and tested to confirm that its output matched what was expected, using the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot&amp;#039;s location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the V-REP simulation, the team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function was achieved in the project. The path planner chooses the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and their final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so the user could check them. The snippet below shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the real-hardware testing, three rovers were able to work together to reach their target coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory produced by the robot. The first figure shows that when only the encoder is used, there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows much less deviation from the desired trajectory during robot localisation. Hence, sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen moving at the same time and sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations could be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still need to be developed to improve system performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17854</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17854"/>
		<updated>2022-06-07T14:16:55Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used today across many industries to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&amp;lt;br&amp;gt;&lt;br /&gt;
1. Autonomous System&amp;lt;br&amp;gt;&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|handle|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&amp;lt;br&amp;gt;&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of agents that work together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, compared to a single-agent system, a multi-agent system enhances overall performance, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Centralised, Decentralised and Hybrid control structures.&amp;lt;br&amp;gt;&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures rely on one individual or input to make decisions and provide direction for the system. Existing studies have already proven that this approach works: a multi-agent system can perform tasks cooperatively with the help of decision making by a central computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures do not rely on any single individual or input to make decisions or provide direction for the system. For the system to work independently, it needs the ability to make decisions on its own according to the current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures implement both centralised and decentralised control in one system. Hence, the system is able to work with or without external input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was taken, thresholded and converted to binary so that it could be read by MATLAB. The image was then inflated for use during path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning we used probabilistic roadmaps (PRM), which build a graph, run a graph search and choose the shortest path that avoids obstacles. For communication, the data are stored in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot testing:&lt;br /&gt;
This testing had three parts. First, the controller was built in MATLAB and tested to confirm that its output matched what was expected, using the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot&amp;#039;s location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the V-REP simulation, the team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function was achieved in the project. The path planner chooses the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and their final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so the user could check them. The snippet below shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the real-hardware testing, three rovers were able to work together to reach their target coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory produced by the robot. The first figure shows that when only the encoder is used, there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows much less deviation from the desired trajectory during robot localisation. Hence, sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen moving at the same time and sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations could be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still need to be developed to improve system performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17851</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17851"/>
		<updated>2022-06-07T14:07:54Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used today across many industries to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|handle|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of agents that work together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, compared to a single-agent system, a multi-agent system enhances overall performance, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Centralised, Decentralised and Hybrid control structures.&lt;br /&gt;
&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures rely on one individual or input to make decisions and provide direction for the system. Existing studies have already proven that this approach works: a multi-agent system can perform tasks cooperatively with the help of decision making by a central computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures do not rely on any single individual or input to make decisions or provide direction for the system. For the system to work independently, it needs the ability to make decisions on its own according to the current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures implement both centralised and decentralised control in one system. Hence, the system is able to work with or without external input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was taken, thresholded and converted to binary so that it could be read by MATLAB. The image was then inflated for use during path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning we used probabilistic roadmaps (PRM), which build a graph, run a graph search and choose the shortest path that avoids obstacles. For communication, the data are stored in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot testing:&lt;br /&gt;
This testing had three parts. First, the controller was built in MATLAB and tested to confirm that its output matched what was expected, using the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot&amp;#039;s location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the V-REP simulation, the team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function was achieved in the project. The path planner chooses the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and their final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so the user could check them. The snippet below shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
In the real-hardware testing, three rovers were able to work together to reach their target coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory produced by the robot. The first figure shows that when only the encoder is used, there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows much less deviation from the desired trajectory during robot localisation. Hence, sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen in motion at the same time while sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In the simulation-based performance testing, improvements can be made so that it runs more smoothly. For the physical system, the communication module and obstacle avoidance still need to be developed to improve overall performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17850</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17850"/>
		<updated>2022-06-07T14:07:35Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years a great deal of research has gone into cooperative multi-agent robotic systems, with the main aim of finding a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection through the CoppeliaSim (V-REP) and MATLAB interface. All agents in the system should be able to communicate with each other while performing the other sub-functions, and the robots should be able to reach the target location with all of them operating simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, perform self-correction and adapt to new or unknown environments. There are a few requirements for a system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. It must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a multi-agent system is a group of agents working together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Compared with a single-agent system, a multi-agent system also enhances overall performance along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. Centralised, decentralised and hybrid control structures&lt;br /&gt;
&lt;br /&gt;
Centralised&lt;br /&gt;
* Centralised control structures rely on a single individual or input to make decisions and provide direction for the system. Existing studies have already demonstrated that this approach works: multi-agent teams can perform tasks cooperatively with the help of decisions made by a central computer.&lt;br /&gt;
* Centralised control is weaker than distributed control in terms of scalability, adaptability, and flexibility [4].&lt;br /&gt;
&lt;br /&gt;
Decentralised&lt;br /&gt;
* Decentralised control structures do not rely on any single individual or input to make decisions or provide direction to the system. For the system to work independently, it needs the ability to make its own decisions according to the current situation.&lt;br /&gt;
&lt;br /&gt;
Hybrid&lt;br /&gt;
* Hybrid control structures implement both centralised and decentralised control in one system. Hence, the system is able to work with or without external input.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method V-REP was used as the simulation environment, with a remote API connection to MATLAB providing the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was captured, thresholded and converted to a binary image that MATLAB can read. The obstacles in the image were then inflated for use during path planning, so that the robots travel at a safe distance from them.&lt;br /&gt;
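The map-preparation step described above can be sketched as follows: threshold a grayscale top-down image into a binary occupancy grid, then inflate the obstacles so planned paths keep a safety margin. The project did this in MATLAB; this pure-Python version, with its hard-coded cutoff and cell-based inflation radius, is only an illustrative sketch.

```python
# Illustrative map preparation: threshold a grayscale image into a
# binary occupancy grid, then inflate obstacles by a safety radius.
# The cutoff value and radius are assumptions, not project parameters.

def threshold(image, cutoff=128):
    """Pixels darker than the cutoff are treated as obstacles (1)."""
    return [[1 if px < cutoff else 0 for px in row] for row in image]

def inflate(grid, radius=1):
    """Mark every cell within `radius` cells of an obstacle as occupied."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out
```

Planning on the inflated grid instead of the raw one is what keeps the robots at a safe distance from the physical obstacles.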
For path planning of the robots we used a Probabilistic Roadmap (PRM), which builds a graph, runs a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication we stored the data in a shared module from which all the robots extract the information.&lt;br /&gt;
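The PRM idea above can be sketched in a few steps: sample points in free space, connect nearby pairs whose straight-line segment is collision-free, then run a shortest-path search over the resulting graph. The project used MATLAB; the sampling range, connection radius and circular-obstacle model in this Python sketch are illustrative assumptions.

```python
# Minimal PRM sketch: sample free-space nodes, link collision-free
# neighbours, then run Dijkstra from start to goal. Obstacles are
# modelled as circles (x, y, r); all parameters are illustrative.
import heapq
import math
import random

def segment_free(p, q, obstacles, steps=20):
    """Check a straight segment against circular obstacles."""
    for i in range(steps + 1):
        t = i / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - ox, y - oy) <= r for ox, oy, r in obstacles):
            return False
    return True

def prm_path(start, goal, obstacles, n_samples=200, radius=3.0, seed=1):
    rng = random.Random(seed)
    nodes = [start, goal] + [
        (rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n_samples)
    ]
    # Discard samples that fall inside an obstacle.
    nodes = [p for p in nodes
             if not any(math.hypot(p[0] - ox, p[1] - oy) <= r
                        for ox, oy, r in obstacles)]
    # Connect nearby pairs with collision-free edges.
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d <= radius and segment_free(nodes[i], nodes[j], obstacles):
                edges[i].append((j, d))
                edges[j].append((i, d))
    # Dijkstra from start (index 0) to goal (index 1).
    dist, prev, pq = {0: 0.0}, {}, [(0.0, 0)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == 1:  # goal reached: rebuild the waypoint list
            path = [1]
            while path[-1] != 0:
                path.append(prev[path[-1]])
            return [nodes[i] for i in reversed(path)]
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, math.inf):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (d + w, v))
    return None  # no path found on this roadmap
```

Because the roadmap only contains collision-free nodes and edges, the shortest path over the graph is automatically a safe path around the obstacles.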
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
This testing proceeded in three stages. First the controller was built in MATLAB, and it was verified through the Simulink and MATLAB interface that its output matched what was expected. In addition, an IMU and wheel encoders on each robot were used to determine its location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle avoidance function was achieved in the project. The path planning works by choosing the shortest path, giving the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion they share their current coordinates with each other. The results were displayed in the command window so the user could check them; the snippet shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware tests, the result is that the three rovers are able to work together in reaching the targeted coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory resulting from the robot's movement. The first figure shows that when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows very little deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen in motion at the same time while sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In the simulation-based performance testing, improvements can be made so that it runs more smoothly. For the physical system, the communication module and obstacle avoidance still need to be developed to improve overall performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17849</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17849"/>
		<updated>2022-06-07T14:05:43Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years a great deal of research has gone into cooperative multi-agent robotic systems, with the main aim of finding a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection through the CoppeliaSim (V-REP) and MATLAB interface. All agents in the system should be able to communicate with each other while performing the other sub-functions, and the robots should be able to reach the target location with all of them operating simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, perform self-correction and adapt to new or unknown environments. There are a few requirements for a system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. It must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a multi-agent system is a group of agents working together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Compared with a single-agent system, a multi-agent system also enhances overall performance along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, decentralised and hybrid control structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method V-REP was used as the simulation environment, with a remote API connection to MATLAB providing the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was captured, thresholded and converted to a binary image that MATLAB can read. The obstacles in the image were then inflated for use during path planning, so that the robots travel at a safe distance from them.&lt;br /&gt;
For path planning of the robots we used a Probabilistic Roadmap (PRM), which builds a graph, runs a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication we stored the data in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
This testing proceeded in three stages. First the controller was built in MATLAB, and it was verified through the Simulink and MATLAB interface that its output matched what was expected. In addition, an IMU and wheel encoders on each robot were used to determine its location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle avoidance function was achieved in the project. The path planning works by choosing the shortest path, giving the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion they share their current coordinates with each other. The results were displayed in the command window so the user could check them; the snippet shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware tests, the result is that the three rovers are able to work together in reaching the targeted coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory resulting from the robot's movement. The first figure shows that when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows very little deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen in motion at the same time while sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In the simulation-based performance testing, improvements can be made so that it runs more smoothly. For the physical system, the communication module and obstacle avoidance still need to be developed to improve overall performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17848</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17848"/>
		<updated>2022-06-07T14:04:28Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years a great deal of research has gone into cooperative multi-agent robotic systems, with the main aim of finding a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection through the CoppeliaSim (V-REP) and MATLAB interface. All agents in the system should be able to communicate with each other while performing the other sub-functions, and the robots should be able to reach the target location with all of them operating simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&amp;lt;br&amp;gt;&lt;br /&gt;
This section provides background for the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, perform self-correction and adapt to new or unknown environments. There are a few requirements for a system to be self-contained. Without any input from the control centre, the system must be able to gather information about its environment and surroundings. It must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a multi-agent system is a group of agents working together in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Compared with a single-agent system, a multi-agent system also enhances overall performance along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, decentralised and hybrid control structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method V-REP was used as the simulation environment, with a remote API connection to MATLAB providing the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing was used for obstacle avoidance and path planning: a top-down image of the environment was captured, thresholded and converted to a binary image that MATLAB can read. The obstacles in the image were then inflated for use during path planning, so that the robots travel at a safe distance from them.&lt;br /&gt;
For path planning of the robots we used a Probabilistic Roadmap (PRM), which builds a graph, runs a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication we stored the data in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
This testing proceeded in three stages. First the controller was built in MATLAB, and it was verified through the Simulink and MATLAB interface that its output matched what was expected. In addition, an IMU and wheel encoders on each robot were used to determine its location and check it for errors. Data sharing between MATLAB and the Arduino on each robot was done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle avoidance function was achieved in the project. The path planning works by choosing the shortest path, giving the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, the path planning works exactly as intended and produces the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion they share their current coordinates with each other. The results were displayed in the command window so the user could check them; the snippet shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware tests, the result is that the three rovers are able to work together in reaching the targeted coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory resulting from the robot's movement. The first figure shows that when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows very little deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved. All three robots could be seen in motion at the same time while sharing coordinates among themselves. In physical robot testing, the robot localisation function was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In the simulation-based performance testing, improvements can be made so that it runs more smoothly. For the physical system, the communication module and obstacle avoidance still need to be developed to improve overall performance.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17847</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17847"/>
		<updated>2022-06-07T13:51:41Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years a great deal of research has gone into cooperative multi-agent robotic systems, with the main aim of finding a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection through the CoppeliaSim (V-REP) and MATLAB interface. All agents in the system should be able to communicate with each other while performing the other sub-functions, and the robots should be able to reach the target location with all of them operating simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&amp;lt;br&amp;gt;&lt;br /&gt;
This section gives background on the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about the environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of such agents working in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, the use of a multi-agent system enhances overall system performance compared to a single-agent system, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, Decentralised and Hybrid Control Structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB Simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment and a remote API connection with MATLAB provides the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing is used for obstacle avoidance and path planning: a top-down image of the environment is captured, thresholded and converted to binary so that it can be read by MATLAB. The binary image is then inflated before path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
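The thresholding and inflation steps above can be sketched as follows. This is an illustrative plain-Python sketch, not the team's MATLAB code; the cutoff value and grid layout are assumptions.

```python
# Illustrative sketch only (plain Python, not the team's MATLAB code) of the
# same idea: threshold an overhead image into a binary occupancy grid, then
# inflate the obstacles so planned paths keep a safe clearance.

def threshold_to_binary(image, cutoff=128):
    """1 = obstacle (pixel darker than the cutoff), 0 = free space."""
    return [[min(1, max(0, cutoff - px)) for px in row] for row in image]

def inflate(grid, radius=1):
    """Mark every cell within `radius` cells of an obstacle as occupied."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c]:
                for rr in range(max(0, r - radius), min(rows, r + radius + 1)):
                    for cc in range(max(0, c - radius), min(cols, c + radius + 1)):
                        out[rr][cc] = 1
    return out

# A 5x5 scene whose single dark pixel becomes a 3x3 inflated obstacle.
image = [[255] * 5 for _ in range(5)]
image[2][2] = 0
occ = inflate(threshold_to_binary(image), radius=1)
```

Inflating by one cell here corresponds to growing each obstacle by the robot's safety clearance expressed in grid cells.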
For path planning of the robots we used Probabilistic Road Mapping (PRM), which applies a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication, the data is stored in a shared module from which all the robots extract the information.&lt;br /&gt;
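As a simplified stand-in for the PRM planner (which samples a sparse roadmap rather than searching every cell), a breadth-first search over the inflated occupancy grid finds a shortest obstacle-free cell path. This sketch is illustrative only and is not the team's implementation.

```python
# Simplified stand-in for the team's MATLAB PRM planner: breadth-first
# search over an inflated binary occupancy grid. BFS on a grid returns a
# shortest obstacle-free path in terms of the number of cell moves.
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path from start to goal through free (0) cells."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if nr in range(rows) and nc in range(cols) \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no obstacle-free route exists

# 3x3 grid with an inflated obstacle in the centre: the path must go around.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))
```

PRM trades this exhaustive grid search for a sparse graph of random samples, which scales better in large continuous environments.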
&lt;br /&gt;
2. MATLAB and Robot Simulation:&lt;br /&gt;
The hardware system was tested in three steps. First, the controller was built in MATLAB and verified to give the expected output; this was done through the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot's location and check it for errors. Data sharing between MATLAB and the Arduino on each robot is done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function has been achieved in the project. Path planning works by choosing the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, path planning works exactly as intended and gives the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so that the user could check them. The snippet below shows the coordinate data used for data verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together in reaching targeted coordinates, using the following four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
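The four listed functions suggest bookkeeping along the following lines. The function names come from the report; the behaviour and data layout here are assumptions for illustration only (the actual TCP message formats are not described).

```python
# Hypothetical sketch of the shared bookkeeping behind the listed functions.
# askID / getCoordinate / setCoordinate are named in the report; their exact
# behaviour and the data layout here are assumptions for illustration.

class CoordinateStore:
    """Shared store that each rover talks to over its TCP connection."""

    def __init__(self):
        self._next_id = 1
        self._coords = {}

    def ask_id(self):
        """askID: hand out a unique rover ID on first connection."""
        rid = self._next_id
        self._next_id += 1
        self._coords[rid] = None
        return rid

    def set_coordinate(self, rid, xy):
        """setCoordinate: a rover reports its current (x, y) position."""
        self._coords[rid] = xy

    def get_coordinate(self, rid):
        """getCoordinate: any rover reads another rover's last position."""
        return self._coords[rid]

store = CoordinateStore()
r1, r2 = store.ask_id(), store.ask_id()
store.set_coordinate(r1, (1.0, 2.0))
```

In the real system each call would travel over the rover's TCP/IP connection rather than being a local method call.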
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual motion trajectory resulting from the movement of the robot. As seen in the first figure, when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
In the figure above, there is much less deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
For the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved: all three robots could be seen in motion at the same time while sharing coordinates among themselves. For physical robot testing, the function of robot localisation was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations can be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still have to be developed so that the system performs much better.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17846</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17846"/>
		<updated>2022-06-07T13:50:29Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
=== Background ===&amp;lt;br&amp;gt;&lt;br /&gt;
This section gives background on the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about the environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of such agents working in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, the use of a multi-agent system enhances overall system performance compared to a single-agent system, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, Decentralised and Hybrid Control Structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB Simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment and a remote API connection with MATLAB provides the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing is used for obstacle avoidance and path planning: a top-down image of the environment is captured, thresholded and converted to binary so that it can be read by MATLAB. The binary image is then inflated before path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning of the robots we used Probabilistic Road Mapping (PRM), which applies a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication, the data is stored in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and Robot Simulation:&lt;br /&gt;
The hardware system was tested in three steps. First, the controller was built in MATLAB and verified to give the expected output; this was done through the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot's location and check it for errors. Data sharing between MATLAB and the Arduino on each robot is done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function has been achieved in the project. Path planning works by choosing the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, path planning works exactly as intended and gives the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so that the user could check them. The snippet below shows the coordinate data used for data verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together in reaching targeted coordinates, using the following four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual motion trajectory resulting from the movement of the robot. As seen in the first figure, when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
In the figure above, there is much less deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
For the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved: all three robots could be seen in motion at the same time while sharing coordinates among themselves. For physical robot testing, the function of robot localisation was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations can be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still have to be developed so that the system performs much better.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17845</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17845"/>
		<updated>2022-06-07T13:50:07Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&amp;lt;br&amp;gt;&lt;br /&gt;
This section gives background on the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about the environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|center|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of such agents working in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, the use of a multi-agent system enhances overall system performance compared to a single-agent system, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|center|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|center|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, Decentralised and Hybrid Control Structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB Simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment and a remote API connection with MATLAB provides the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing is used for obstacle avoidance and path planning: a top-down image of the environment is captured, thresholded and converted to binary so that it can be read by MATLAB. The binary image is then inflated before path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning of the robots we used Probabilistic Road Mapping (PRM), which applies a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication, the data is stored in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and Robot Simulation:&lt;br /&gt;
The hardware system was tested in three steps. First, the controller was built in MATLAB and verified to give the expected output; this was done through the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot's location and check it for errors. Data sharing between MATLAB and the Arduino on each robot is done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The obstacle-avoidance function has been achieved in the project. Path planning works by choosing the shortest path, which gives the system good performance.&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above, path planning works exactly as intended and gives the desired results. The robots are given target coordinates and reach their target locations. The figure below shows the robots at their initial positions on the left and final positions on the right. The robots complete this task fully autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While the robots are in motion, they share their current coordinates with one another. The results were displayed in the command window so that the user could check them. The snippet below shows the coordinate data used for data verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together in reaching targeted coordinates, using the following four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual motion trajectory resulting from the movement of the robot. As seen in the first figure, when only the encoder is used there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
In the figure above, there is much less deviation from the desired trajectory during robot localisation. Hence, using sensor fusion gave much better results.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
For the virtual platform, the functions of obstacle avoidance, path planning and communication were achieved: all three robots could be seen in motion at the same time while sharing coordinates among themselves. For physical robot testing, the function of robot localisation was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
In performance testing, the simulations can be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still have to be developed so that the system performs much better.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17844</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17844"/>
		<updated>2022-06-07T13:48:25Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used across many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Develop a distributed multi-agent system that demonstrates a multi-robot team working together while performing obstacle avoidance, path planning, communication and object detection using the CoppeliaSim (V-REP) and MATLAB interface. All the agents in the system should be able to communicate with each other while performing the other sub-functions. The robots should be able to reach the target location with all of them working simultaneously.&lt;br /&gt;
&lt;br /&gt;
== Background ==&amp;lt;br&amp;gt;&lt;br /&gt;
This section gives background on the project, including the hardware and software tools used.&lt;br /&gt;
1. Autonomous System&lt;br /&gt;
The term “autonomous system” refers to a system that can perceive its surroundings, execute self-correction, and adapt to new or unknown environments. There are a few requirements for the system to be self-contained. Without any input from the control centre, the system must be able to gather information about the environment and surroundings. The system must also function without human intervention and be capable of self-maintenance [1].&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:Autonomous system applications.png|thumb|Figure 1: Autonomous system applications (https://www.sohu.com/a/405012983_391530)]]&lt;br /&gt;
&lt;br /&gt;
2. Agent and Multi-Agent System&lt;br /&gt;
In general, an “agent” is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors [2], whereas a “multi-agent system” is a group of such agents working in a shared environment. In an open and distributed environment, agents are sophisticated computer programs that act autonomously on behalf of their users. Moreover, the use of a multi-agent system enhances overall system performance compared to a single-agent system, specifically along the dimensions of computational efficiency, reliability, extensibility, robustness, maintainability, responsiveness, and flexibility.&lt;br /&gt;
[[File:Agent structure.jpg|thumb|Figure 2: Agent structure]]&lt;br /&gt;
[[File:Multi-agent system structure.png|thumb|Figure 3: Multi-agent system structure [3]]]&lt;br /&gt;
3. Centralised, Decentralised and Hybrid Control Structures&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB Simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment and a remote API connection with MATLAB provides the input. Path planning, robot localisation and obstacle avoidance were performed for a multi-robot system. Image processing is used for obstacle avoidance and path planning: a top-down image of the environment is captured, thresholded and converted to binary so that it can be read by MATLAB. The binary image is then inflated before path planning so that the robots travel at a safe distance from the obstacles.&lt;br /&gt;
For path planning of the robots we used Probabilistic Road Mapping (PRM), which applies a graph-search algorithm and chooses the shortest path that avoids obstacles. For communication, the data is stored in a shared module from which all the robots extract the information.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and Robot Simulation:&lt;br /&gt;
The hardware system was tested in three steps. First, the controller was built in MATLAB and verified to give the expected output; this was done through the Simulink and MATLAB interface. In addition, an IMU and encoders on the robots were used to determine each robot's location and check it for errors. Data sharing between MATLAB and the Arduino on each robot is done over TCP.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) Virtual Platform&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For V-REP simulation, the results is that the project&amp;#039;s team members were able to create a multi-rovers that are working together in V-Rep scene.&amp;lt;br&amp;gt;&lt;br /&gt;
The functions of obstacle have been achieved in the project . The path planning works choosing the shortest path making the system good in performance .&lt;br /&gt;
[[File:Screen Shot 2022-06-07 at 6.57.59 pm.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As seen in the figure above the path planning is working exactly the way we want it to and we get the desired results. The robots are given the target coordinates and they reach their target location.The figure below shows the robots at their initial position on the left and final position on the right.The robots complete this task completely autonomously once the desired coordinates have been given to them.&lt;br /&gt;
[[File:Dfsdfd.png|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
While in motion, the robots share their current coordinates with one another. The results were displayed in the command window so the user can check them. The snippet below shows the coordinate data used for verification.&lt;br /&gt;
[[File:Coord.png|frame|center]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
In the two figures below, the red dashed line is the set motion trajectory and the blue solid line is the actual trajectory resulting from the robot's movement. The first figure shows that when only the encoder is used, there is a large deviation from the desired trajectory.&lt;br /&gt;
[[File:Encoder.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Fusion.jpg|frame|center]] &amp;lt;br&amp;gt;&lt;br /&gt;
The figure above shows a much smaller deviation from the desired trajectory during robot localisation. Hence using sensor fusion gave much better results.&lt;br /&gt;
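One common way to realise this encoder-plus-IMU fusion is a complementary filter, which trusts the integrated gyro rate over short intervals and the encoder-derived heading over the long run. This is a sketch under assumed values: the gain alpha and the sample inputs are illustrative, not the project's tuned parameters.

```python
def fuse_heading(encoder_heading, gyro_rate, prev_fused, dt, alpha=0.98):
    """Complementary filter for heading (radians).
    Blends the gyro-integrated heading (weight alpha) with the
    encoder-derived heading (weight 1 - alpha)."""
    gyro_heading = prev_fused + gyro_rate * dt  # integrate gyro rate
    return alpha * gyro_heading + (1 - alpha) * encoder_heading
```

Called once per control step, the filter suppresses the encoder's wheel-slip error while the small encoder weight stops gyro drift from accumulating.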
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
On the virtual platform, obstacle avoidance, path planning and communication were achieved. All three robots could be seen in motion at the same time while sharing coordinates among themselves. In physical robot testing, robot localisation was achieved.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
The simulation-based performance testing can be improved to run more smoothly. For the physical robots, the communication module and obstacle avoidance still have to be developed to make the system perform much better.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Multi-agent_system_structure.png&amp;diff=17843</id>
		<title>File:Multi-agent system structure.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Multi-agent_system_structure.png&amp;diff=17843"/>
		<updated>2022-06-07T13:47:17Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Picture of Multi-agent system structure&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Agent_structure.jpg&amp;diff=17842</id>
		<title>File:Agent structure.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Agent_structure.jpg&amp;diff=17842"/>
		<updated>2022-06-07T13:45:23Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Picture of Agent structure&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Autonomous_system_applications.png&amp;diff=17841</id>
		<title>File:Autonomous system applications.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Autonomous_system_applications.png&amp;diff=17841"/>
		<updated>2022-06-07T13:44:35Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Picture of Autonomous system applications (https://www.sohu.com/a/405012983_391530)&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17715</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17715"/>
		<updated>2022-06-07T03:01:01Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment, and a remote API connection with MATLAB provides the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
Pictures and the methods used to achieve these results will be added.&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on virtual platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Obstacle Detection&lt;br /&gt;
*Image Processing&lt;br /&gt;
*Path Planning&lt;br /&gt;
*Bidirectional Communication&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on physical platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Robot Localization&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
*Physical Robot Testing&lt;br /&gt;
*Performance Improvement in Simulations&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17714</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17714"/>
		<updated>2022-06-07T02:52:24Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment, and a remote API connection with MATLAB provides the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on virtual platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Obstacle Detection&lt;br /&gt;
*Image Processing&lt;br /&gt;
*Path Planning&lt;br /&gt;
*Bidirectional Communication&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on physical platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Robot Localization&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Future Work&amp;lt;/big&amp;gt;&lt;br /&gt;
*Physical Robot Testing&lt;br /&gt;
*Performance Improvement in Simulations&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17713</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17713"/>
		<updated>2022-06-07T02:52:11Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment, and a remote API connection with MATLAB provides the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on virtual platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Obstacle Detection&lt;br /&gt;
*Image Processing&lt;br /&gt;
*Path Planning&lt;br /&gt;
*Bidirectional Communication&lt;br /&gt;
&amp;lt;big&amp;gt;Performance testing on physical platforms&amp;lt;/big&amp;gt;&lt;br /&gt;
*Robot Localization&lt;br /&gt;
&lt;br /&gt;
Future Work&lt;br /&gt;
*Physical Robot Testing&lt;br /&gt;
*Performance Improvement in Simulations&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17712</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17712"/>
		<updated>2022-06-07T02:51:18Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment, and a remote API connection with MATLAB provides the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
#Performance testing on virtual platforms&lt;br /&gt;
*Obstacle Detection&lt;br /&gt;
*Image Processing&lt;br /&gt;
*Path Planning&lt;br /&gt;
*Bidirectional Communication&lt;br /&gt;
#Performance testing on physical platforms&lt;br /&gt;
*Robot Localization&lt;br /&gt;
&lt;br /&gt;
Future Work&lt;br /&gt;
*Physical Robot Testing&lt;br /&gt;
*Performance Improvement in Simulations&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17711</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17711"/>
		<updated>2022-06-07T02:46:20Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
In this method, V-REP is used as the simulation environment, and a remote API connection with MATLAB provides the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
2. MATLAB and robot simulation:&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers working together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware simulation, the result is that three rovers are able to work together to reach target coordinates, using four functions.&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17710</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17710"/>
		<updated>2022-06-07T02:41:55Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many different industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many more. In recent years, a lot of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates a multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of ‘Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors”[2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Work cooperatively instead of working individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is the cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
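The per-robot control step behind such a simulation can be sketched as below. This is a minimal illustration only; the function, gains and limits here are hypothetical, not taken from the project code.&lt;br /&gt;

```python
import math

# Hypothetical differential-drive step: given a robot pose (x, y, theta)
# and a target waypoint (tx, ty), compute left/right wheel velocities
# that steer the robot toward the target. Gains are illustrative only.
def drive_step(x, y, theta, tx, ty, k_lin=0.5, k_ang=2.0, v_max=1.0):
    dist = math.hypot(tx - x, ty - y)
    heading = math.atan2(ty - y, tx - x)
    # wrap the heading error into [-pi, pi]
    err = math.atan2(math.sin(heading - theta), math.cos(heading - theta))
    v = min(k_lin * dist, v_max)   # forward speed, capped
    w = k_ang * err                # turn rate proportional to heading error
    # convert (v, w) to wheel speeds for a unit wheelbase
    left = v - 0.5 * w
    right = v + 0.5 * w
    return left, right

# Robot at the origin facing +x with the target straight ahead:
# the heading error is zero, so both wheels get the same speed.
print(drive_step(0.0, 0.0, 0.0, 2.0, 0.0))
```

Each robot would run such a step every control tick, with the pose read back from the simulator over the remote API connection.&lt;br /&gt;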
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware test, the result is that three rovers are able to work together in reaching targeted coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
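A toy version of the coordinate exchange behind these four functions can be sketched as below. The message names follow the list above, but the wire format, port handling and replies are hypothetical illustrations, not the actual project implementation.&lt;br /&gt;

```python
import socket
import threading

# Hypothetical line-based protocol over TCP/IP implementing the four
# functions named above: one request per connection, plain-text replies.
STATE = {"id": "rover1", "coord": (0.0, 0.0)}

def handle(conn):
    parts = conn.recv(1024).decode().strip().split()
    if parts and parts[0] == "askID":
        reply = STATE["id"]
    elif parts and parts[0] == "getCoordinate":
        reply = "%f %f" % STATE["coord"]
    elif parts and parts[0] == "setCoordinate":
        STATE["coord"] = (float(parts[1]), float(parts[2]))
        reply = "ok"
    else:
        reply = "err"
    conn.sendall(reply.encode())
    conn.close()

def serve(srv):
    while True:
        conn, _ = srv.accept()
        handle(conn)

def request(port, msg):
    # open a TCP/IP connection, send one request, read the reply
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(msg.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))   # ephemeral port, to avoid collisions
srv.listen(5)
port = srv.getsockname()[1]
threading.Thread(target=serve, args=(srv,), daemon=True).start()

print(request(port, "askID"))                  # rover1
print(request(port, "setCoordinate 1.5 2.0"))  # ok
print(request(port, "getCoordinate"))          # 1.500000 2.000000
```

In the real system each rover would run such a responder, letting its neighbours query its ID and exchange target coordinates.&lt;br /&gt;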
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17709</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17709"/>
		<updated>2022-06-07T02:40:25Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&amp;lt;br&amp;gt;&lt;br /&gt;
For the real-hardware test, the result is that three rovers are able to work together in reaching targeted coordinates, using four functions:&amp;lt;br&amp;gt;&lt;br /&gt;
# TCP/IP connection&lt;br /&gt;
# askID&lt;br /&gt;
# getCoordinate&lt;br /&gt;
# setCoordinate&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17708</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17708"/>
		<updated>2022-06-07T01:59:22Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For the V-REP simulation, the result is that the project team was able to create multiple rovers that work together in a V-REP scene.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17706</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17706"/>
		<updated>2022-06-06T05:35:10Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17705</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17705"/>
		<updated>2022-06-06T05:34:52Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;1) V-REP simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;2) Real-hardware simulation&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17704</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17704"/>
		<updated>2022-06-05T12:52:36Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17703</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=17703"/>
		<updated>2022-06-05T12:52:16Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Objectives */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s2|FYP 63333]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Robot technology is widely used in many industries today to perform a wide variety of tasks. Robots are used in agriculture, food technology, manufacturing, the military, the automotive industry and many other fields. In recent years, a great deal of research has gone into cooperative multi-agent robotic systems. The main purpose is to find a feasible solution for optimal communication among the agents.&lt;br /&gt;
&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Ruixiang Meng&lt;br /&gt;
* Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
* Sherif Bhalla&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Prof. Peng Shi&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
==== Technical Advisors ====&lt;br /&gt;
*Yang Fei&lt;br /&gt;
*Yuan Sun&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Design a cooperative multi-robot system with a Simulink-with-Arduino based approach&lt;br /&gt;
Design a scene that demonstrates the multi-robot system working together while performing obstacle avoidance, path planning, communication and object detection, using the CoppeliaSim (V-REP) and MATLAB interface&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== What is an autonomous system? ===&lt;br /&gt;
An autonomous system can sense, perceive, plan, and act without intervention.&lt;br /&gt;
* Able to gain information about the environment&lt;br /&gt;
* Able to work without human intervention&lt;br /&gt;
* Able to perform self-maintenance [1]&lt;br /&gt;
=== The concept of &amp;#039;Agent&amp;#039; ===&lt;br /&gt;
* “An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors” [2].&lt;br /&gt;
=== Multi-agent system ===&lt;br /&gt;
* Agents work cooperatively instead of individually&lt;br /&gt;
* Communication among individual agents&lt;br /&gt;
=== What is a cooperative multi-robot system? ===&lt;br /&gt;
* A group of robots that seek to achieve a collective task as a team. &lt;br /&gt;
* Each individual robot makes decisions based on available local information as well as limited communications with neighboring robots.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The system was tested using two different methods, described below:&lt;br /&gt;
&lt;br /&gt;
1. V-REP and MATLAB simulation:&lt;br /&gt;
Here, V-REP was used as the simulation environment, and a remote API connection with MATLAB was used to provide the input. Path planning, robot localization and obstacle avoidance were performed for a multi-robot system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] M. Parashar and S. Hariri, &amp;quot;Autonomic Computing: An Overview&amp;quot;, 2005. Available: 10.1.1.62.2957.&lt;br /&gt;
&lt;br /&gt;
[2] M. Glavic, &amp;quot;Agents and Multi-Agent Systems: A Short Introduction for Power Engineers&amp;quot;, University of Liege, Luik, 2006.&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=16448</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=16448"/>
		<updated>2021-09-18T16:40:36Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Team Members:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Ruixiang Meng,&lt;br /&gt;
Muhammad Haniff Fauzan Bin Derani,&lt;br /&gt;
Sherif Bhalla&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Supervised by:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Prof. Peng Shi,&lt;br /&gt;
Prof. Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Technical Advisors:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Yang Fei,&lt;br /&gt;
Yuan Sun&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=16447</id>
		<title>Projects:2021s2-63333 Cooperative multi-robot system: a Simulink-with-Arduino based approach</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s2-63333_Cooperative_multi-robot_system:_a_Simulink-with-Arduino_based_approach&amp;diff=16447"/>
		<updated>2021-09-18T16:39:29Z</updated>

		<summary type="html">&lt;p&gt;A1789638: Created page with &amp;quot;&amp;#039;&amp;#039;&amp;#039;Team Members:&amp;#039;&amp;#039;&amp;#039; Ruixiang Meng Muhammad Haniff Fauzan Bin Derani Sherif Bhalla  &amp;#039;&amp;#039;&amp;#039;Supervised by：&amp;#039;&amp;#039;&amp;#039; Prof. Peng Shi Prof. Cheng-Chew Lim  &amp;#039;&amp;#039;&amp;#039;Technical Advisors:&amp;#039;&amp;#039;&amp;#039;...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Team Members:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Ruixiang Meng&lt;br /&gt;
Muhammad Haniff Fauzan Bin Derani&lt;br /&gt;
Sherif Bhalla&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Supervised by:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Prof. Peng Shi&lt;br /&gt;
Prof. Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Technical Advisors:&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
Yang Fei&lt;br /&gt;
Yuan Sun&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15162</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15162"/>
		<updated>2020-09-20T04:19:51Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noise sources such as air conditioning or computer fans, and environmentally generated noise such as in a street, an airport, a metro station, or an airplane cockpit. Developing AI models for signals obtained from the variety of situations exemplified above is not trivial, but this has been attempted using recurrent and convolutional networks such as the Speech Enhancement Generative Adversarial Network (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
== Method == &lt;br /&gt;
== Results ==&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15161</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15161"/>
		<updated>2020-09-20T04:19:37Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Supervisors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
== Method == &lt;br /&gt;
== Results ==&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15160</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15160"/>
		<updated>2020-09-20T04:10:39Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
== Method == &lt;br /&gt;
== Results ==&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15159</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15159"/>
		<updated>2020-09-20T04:10:02Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
== Method == &lt;br /&gt;
== Results ==&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15150</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15150"/>
		<updated>2020-09-19T15:51:36Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Project Aims */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15149</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15149"/>
		<updated>2020-09-19T15:50:59Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Project Aims */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
* Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
* Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
* Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
* Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15148</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15148"/>
		<updated>2020-09-19T15:50:24Z</updated>

		<summary type="html">&lt;p&gt;A1789638: /* Project Team */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2020s2|7410]]&lt;br /&gt;
&lt;br /&gt;
An increasing number of applications require the joint use of signal processing and AI techniques on time series and sensor data. These techniques can be used for the reduction of noises such as air conditioning, computer fan, or environmentally generated noises such as in a street, an airport, in a metro station, or in an airplane cockpit. Developing AI models for signal obtained from a variety of situations as exemplified above is not trivial, but these have been attempted using Recurrent and Convolutional Networks such as Speech Enhanced Generative Adversarial Neural networks (SEGAN).&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
=== Project Aims ===&lt;br /&gt;
&lt;br /&gt;
Use HARK with PyKALDI on the High-Performance Computer.&lt;br /&gt;
&lt;br /&gt;
Develop an algorithm using HARK for noise processing on HPC.&lt;br /&gt;
&lt;br /&gt;
Evaluate the performance of HARK relative to a number of noise types.&lt;br /&gt;
&lt;br /&gt;
Perform speaker identification using PyKALDI on HPC.&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15115</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15115"/>
		<updated>2020-09-14T12:07:11Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Introduction&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Project Team&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Students&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Supervisors&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Abstract&amp;#039;&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15114</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15114"/>
		<updated>2020-09-14T12:06:55Z</updated>

		<summary type="html">&lt;p&gt;A1789638: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;#039;&amp;#039;&amp;#039;Introduction&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Project Team&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Students&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Muhammad Haniff Derani&lt;br /&gt;
* Shuyang Shen&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Supervisors&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
* Dr. Said Al-Sarawi (School of EEE, University of Adelaide)&lt;br /&gt;
* Dr. Ahmad Hashemi-Sakhtsari (DST Group)&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Abstract&amp;#039;&amp;#039;&amp;#039;&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15111</id>
		<title>Projects:2020s2-7410 Speech Enhancement for Automatic Speech Recognition</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s2-7410_Speech_Enhancement_for_Automatic_Speech_Recognition&amp;diff=15111"/>
		<updated>2020-09-14T12:00:27Z</updated>

		<summary type="html">&lt;p&gt;A1789638: Created page with &amp;quot;hhhhhhhhhhhh&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;hhhhhhhhhhhh&lt;/div&gt;</summary>
		<author><name>A1789638</name></author>
		
	</entry>
</feed>