<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1669943</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1669943"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1669943"/>
	<updated>2026-05-16T15:13:11Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15835</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15835"/>
		<updated>2020-10-19T12:23:12Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Camera */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When the target object is found, the manipulator uses its arm to pick it up and load it onto the carrier; the detection of a target triggers this form of cooperation between the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
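The project's planner runs in MATLAB/V-REP; purely as an illustration, a minimal Python sketch of grid-based A* follows. The occupancy-grid representation, unit step costs, and Manhattan heuristic are assumptions for this sketch, not the project's actual data structures.

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected occupancy grid.

    grid: 2D list, 0 = free cell, 1 = obstacle.
    Priority f(n) = g(n) + h(n): cost so far plus a Manhattan-distance
    estimate of the remaining cost to the goal.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue  # already reached this cell more cheaply
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no collision-free path exists
```

Because the Manhattan heuristic never overestimates on a 4-connected grid, the first time the goal is popped the path is optimal.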
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum likelihood method to estimate the agent's current position.&lt;br /&gt;
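Full EKF SLAM jointly estimates the robot pose and the landmark positions. As a simplified, hypothetical Python sketch (not the project's MATLAB filter), a single EKF predict/update cycle for a planar pose against one landmark of known position, assuming a unicycle motion model and a range-bearing sensor:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, landmark, R, Q, dt=1.0):
    """One EKF predict/update cycle for a planar robot pose (x, y, theta).

    u = (v, w): commanded linear and angular velocity.
    z = (range, bearing): measurement of a landmark at a known position.
    R, Q: motion and measurement noise covariances (3x3 and 2x2).
    """
    x, y, th = mu
    v, w = u
    # Predict: propagate the pose through a simple unicycle motion model.
    mu_bar = np.array([x + v * dt * np.cos(th),
                       y + v * dt * np.sin(th),
                       th + w * dt])
    G = np.array([[1, 0, -v * dt * np.sin(th)],   # motion Jacobian
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    Sigma_bar = G @ Sigma @ G.T + R

    # Update: compare measured range/bearing with the expected one.
    dx, dy = landmark[0] - mu_bar[0], landmark[1] - mu_bar[1]
    q = dx ** 2 + dy ** 2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - mu_bar[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [dy / q, -dx / q, -1]])          # measurement Jacobian
    K = Sigma_bar @ H.T @ np.linalg.inv(H @ Sigma_bar @ H.T + Q)
    innov = z - z_hat
    innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing
    mu_new = mu_bar + K @ innov
    Sigma_new = (np.eye(3) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new
```

In full EKF SLAM the state vector also contains every landmark, and the maximum likelihood step associates each detected feature with the landmark whose predicted measurement it best matches.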
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
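For context on how such a platform is driven, a standard inverse-kinematics sketch in Python. The wheel mounting angles (0°, 120°, 240°) and body radius are textbook assumptions, not values taken from the project hardware:

```python
import numpy as np

def omni_wheel_speeds(vx, vy, omega, L=0.15,
                      alphas=(0.0, 2 * np.pi / 3, 4 * np.pi / 3)):
    """Inverse kinematics for a 3-wheel omnidirectional platform.

    Maps a desired body-frame velocity (vx, vy, omega) to the three
    wheel rim speeds. alphas are the wheel mounting angles and L the
    distance from the body centre to each wheel (both illustrative).
    """
    return np.array([-np.sin(a) * vx + np.cos(a) * vy + L * omega
                     for a in alphas])
```

A pure rotation command drives all three wheels at the same speed, while a pure translation produces wheel speeds that sum to zero, which is why the platform can move in any direction without reorienting.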
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed, as was the velocity of the rover at various error levels. The system is given a slow start, since the error at the beginning is the largest; as the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
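The fuzzy controller itself is built in MATLAB; a crisp gain-scheduling sketch in Python captures the behaviour described above. The error-band thresholds and amplifier values are illustrative assumptions:

```python
def scheduled_gain(error, bands=((2.0, 0.5), (0.5, 1.0), (0.0, 2.0))):
    """Error-dependent amplifier selection, a crisp stand-in for the
    fuzzy controller: large errors get a small amplifier (slow start),
    small errors get a larger one (sped-up finish).

    bands: (threshold, gain) pairs in descending threshold order;
    the first band whose threshold the |error| meets is used.
    """
    e = abs(error)
    for threshold, gain in bands:
        if e >= threshold:
            return gain
    return bands[-1][1]

def control(error):
    """Proportional command with the scheduled amplifier applied."""
    return scheduled_gain(error) * error
```

A true fuzzy controller would blend neighbouring bands through membership functions instead of switching abruptly; the abrupt switches are what produce the dips seen in the test graphs below.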
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
The target object is a red sphere. To detect it, image segmentation that removes all colours other than red is used to locate the blob representing the target in the image. Once this blob is detected, it is labelled with a bounding box and a centroid, which describe the region of the image containing the target. These labels contain the pixel locations of interest, which are used to orient the rover so that it faces the target and to estimate the distance of the target object from the camera. Using this distance, together with the orientation and position of the rover in the global coordinate frame, the location of the target is transformed from the rover frame to the global frame via multiplication with a homogeneous transform matrix. These coordinates are given to A* path planning and the robot arm for arrival and pickup of the target.&lt;br /&gt;
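The rover-frame to global-frame step can be sketched with a 2D homogeneous transform. This Python fragment is illustrative only, not the project's MATLAB implementation:

```python
import numpy as np

def target_to_global(rover_pose, target_in_rover):
    """Transform a target position from the rover frame to the global
    frame via a 2D homogeneous transform.

    rover_pose: (x, y, theta) of the rover in the global frame.
    target_in_rover: (tx, ty) of the target in the rover frame,
    e.g. recovered from the camera bearing and estimated range.
    """
    x, y, th = rover_pose
    # Rotation by theta plus translation to the rover's position.
    T = np.array([[np.cos(th), -np.sin(th), x],
                  [np.sin(th),  np.cos(th), y],
                  [0.0,         0.0,        1.0]])
    tx, ty = target_in_rover
    return (T @ np.array([tx, ty, 1.0]))[:2]
```

For example, a rover at (1, 2) facing 90° that sees the target 1 m straight ahead places it at (1, 3) in the global frame.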
&lt;br /&gt;
The rovers have a green body and are detected using this colour feature in the same way. When the rover body is partially obstructed by one of its wheels, the two resulting blobs are merged so that the centre of the rover is still estimated accurately. Rovers use detection of each other to initialise their Cartesian coordinate systems, converting from a local frame to a global frame.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
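As an illustration of the tuning trade-off above, a discrete-time Python sketch of PD position control on a unit point mass. The plant, gains, time step and step count are illustrative assumptions, not the rover model:

```python
def simulate_pd(kp, kd, target=1.0, dt=0.05, steps=400):
    """Drive a unit point mass toward `target` with a PD position
    controller (force = Kp * error - Kd * velocity) and return the
    final position. High Kp with low Kd overshoots; very low Kd
    alone slows the rise; balanced gains settle quickly.
    """
    pos, vel = 0.0, 0.0
    for _ in range(steps):
        force = kp * (target - pos) - kd * vel
        vel += force * dt   # semi-implicit Euler integration
        pos += vel * dt
    return pos
```

With kp = 4 and kd = 3 the closed loop is well damped and the mass settles at the target within the simulated window.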
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, rather than approaching the endpoint slowly, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|Environment features collected by the sensors]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
The map provides agents with obstacle location information, which they can use with A* search to plan an optimal, collision-free path to the destination.&lt;br /&gt;
[[File:Astar.jpg|thumb|center|A* planned path for the rover to reach the target from its current location]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|center|Ground truth position in the simulation]]&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|center|EKF predicted position]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance of the target from the camera uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
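The depth-map lookup described above is a one-line operation; a hypothetical Python sketch, assuming a row-major depth image indexed as [row][column]:

```python
def target_distance(depth_map, centroid):
    """Return the range at the blob centroid.

    depth_map: 2D array of distances, indexed [row][column].
    centroid: (u, v) pixel coordinates, u = column (image x),
    v = row (image y). Rounding handles sub-pixel centroids.
    """
    u, v = int(round(centroid[0])), int(round(centroid[1]))
    return depth_map[v][u]
```

In practice one would average a small window around the centroid to reduce sensor noise rather than trust a single pixel.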
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating bounding box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
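The power-model fit reduces to linear least squares in log-log space; a small Python sketch with synthetic data (not the project's measurements):

```python
import math

def fit_power(heights, distances):
    """Fit distance = a * height**b by ordinary least squares on
    (log height, log distance), mirroring the power trendline
    described above. Returns the coefficients (a, b)."""
    xs = [math.log(h) for h in heights]
    ys = [math.log(d) for d in distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent b.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b
```

For an ideal pinhole camera the apparent height scales as 1/distance, so the fitted exponent should come out near -1, which is one sanity check on the calibration data.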
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15832</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15832"/>
		<updated>2020-10-19T12:19:03Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Camera */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum likelihood method to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. When the error values decreased, the fuzzy controller switches to a different amplifier value so that the system is sped up.&lt;br /&gt;
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
The target object is a red sphere. Therefore, to detect this object, image segmentation which removes all colours other than red is used in order to locate the blob which represents the target in the image. Once this blob is detected, it is labelled with a bounding box and a centroid which describe the region of the image containing the target. These labels contain the pixel locations of interest, which are used to orient the rover so it is facing the target, and also used to estimate the distance of the target object from the camera.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, rather than approaching the endpoint slowly, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|environment features collected by sensor]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.&lt;br /&gt;
[[File:Astar.jpg|thumb|center|]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|center|Ground truth position in the simulation]]&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|center|EKF predicted position]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance of the target from the camera uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating bounding box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15820</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15820"/>
		<updated>2020-10-19T12:12:53Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost incurred in moving from the current node to the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
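As a rough illustration of the cost function described above, the following Python sketch (hypothetical — the project itself runs in MATLAB/V-REP) plans a path on a small occupancy grid using f(n) = g(n) + h(n), where g is the cost already incurred and h is a Manhattan-distance estimate of the remaining cost:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda n: abs(n[0] - goal[0]) + abs(n[1] - goal[1])  # heuristic
    open_set = [(h(start), 0, start, None)]  # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a lower f
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path by walking parents back
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1             # unit step cost
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None  # no path exists

# Toy map: the agent must detour around the wall in the middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates the remaining cost, the returned path is optimal on this grid.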
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM uses maximum-likelihood estimation to predict the agent’s current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
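To make the predict/update cycle concrete, here is a deliberately simplified Kalman-filter sketch in Python for a planar rover observing one known landmark. A full EKF SLAM additionally linearises a nonlinear motion/measurement model and augments the state with landmark positions; all numbers below are illustrative, not the project's:

```python
def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def ekf_step(x, P, u, z, landmark, Q, R):
    """One predict/update cycle for state x = [x, y] with covariance P."""
    # Predict: rover moves by the commanded displacement u; uncertainty grows by Q.
    x = [x[0] + u[0], x[1] + u[1]]
    P = madd(P, Q)
    # Update: z is the landmark position measured relative to the rover, so the
    # predicted measurement is (landmark - x) and the Jacobian is H = -I.
    y = [z[0] - (landmark[0] - x[0]), z[1] - (landmark[1] - x[1])]   # innovation
    Sinv = inv2(madd(P, R))                  # S = H P H^T + R = P + R
    K = [[-sum(P[i][k] * Sinv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                  # K = P H^T S^-1 = -P S^-1
    x = [x[0] + K[0][0] * y[0] + K[0][1] * y[1],
         x[1] + K[1][0] * y[0] + K[1][1] * y[1]]
    P = mmul(madd([[1.0, 0.0], [0.0, 1.0]], K), P)   # (I - K H) P = (I + K) P
    return x, P

# Rover at the origin is commanded 1 m in x; a landmark at (5, 0) is then
# measured 4 m ahead, which agrees with the prediction and shrinks P.
Q = [[0.1, 0.0], [0.0, 0.1]]
R = [[0.1, 0.0], [0.0, 0.1]]
x1, P1 = ekf_step([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                  [1.0, 0.0], [4.0, 0.0], [5.0, 0.0], Q, R)
```

The key behaviour matches the figures: the motion step inflates the position uncertainty, and each landmark observation pulls the estimate back toward the ground truth.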
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error at the beginning is largest. The velocity of the rover was analysed at various error levels. As the error values decreased, the fuzzy controller switched to a different amplifier value so that the system sped up.&lt;br /&gt;
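The gain-scheduling idea described above — a low amplifier at large error, a higher amplifier as the error shrinks — can be sketched with triangular membership functions. The breakpoints and amplifier values below are illustrative assumptions, not the project's tuned parameters:

```python
def triangular(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_amplifier(error):
    """Blend amplifier values by the membership of |error| in three fuzzy sets.

    Large error  -> low amplifier (slow start, restricted acceleration);
    small error  -> high amplifier (sped-up final approach).
    Overlapping memberships make the output change smoothly between levels.
    """
    e = abs(error)
    member = {
        "small":  triangular(e, -0.5, 0.0, 0.5),
        "medium": triangular(e, 0.2, 0.75, 1.3),
        # shoulder set: fully "large" once the error exceeds 2.0
        "large":  1.0 if e >= 2.0 else triangular(e, 1.0, 2.0, 3.0),
    }
    amp = {"small": 3.0, "medium": 2.0, "large": 1.0}
    den = sum(member.values())
    return sum(member[k] * amp[k] for k in member) / den if den else amp["large"]
```

Defuzzification here is a weighted average of the candidate amplifiers, so the controller constant ramps up gradually as the rover nears its setpoint rather than switching abruptly.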
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
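The trade-off shown in the tuning figure can be reproduced with a toy simulation of a unit mass under PD control; the gains here are illustrative, not the rover's actual constants:

```python
def simulate_pd(kp, kd, target=1.0, dt=0.01, steps=2000):
    """Unit mass under PD position control, integrated with semi-implicit Euler."""
    x, v = 0.0, 0.0
    traj = []
    for _ in range(steps):
        u = kp * (target - x) - kd * v   # error derivative = -v for a fixed target
        v += u * dt
        x += v * dt
        traj.append(x)
    return traj

def overshoot(traj, target=1.0):
    """How far the response exceeds the target at its peak."""
    return max(0.0, max(traj) - target)

# High Kp with little damping overshoots; a balanced pair does not.
hot = simulate_pd(25.0, 2.0)       # underdamped: fast rise but large overshoot
balanced = simulate_pd(4.0, 4.0)   # critically damped for a unit mass
```

For a unit mass the damping ratio is Kd / (2·√Kp), so the balanced pair (4, 4) sits exactly at critical damping while (25, 2) is heavily underdamped.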
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance of the target from the camera uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
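The centroid-indexing step might look like the following sketch (the depth map and centroid values are made up for illustration):

```python
def target_distance(depth_map, centroid):
    """Look up the depth value at a detected blob's centroid.

    depth_map is a row-major 2D array with one distance per pixel, as an
    RGB-D sensor would provide; centroid is the (row, col) of the blob's
    centre from the colour-based detector, rounded to the nearest pixel.
    """
    r, c = int(round(centroid[0])), int(round(centroid[1]))
    return depth_map[r][c]

# Tiny illustrative depth image: a 4x4 frame where the red target occupies
# the centre pixels at ~2.5 m and the background is ~9 m away.
depth = [[9.0, 9.0, 9.0, 9.0],
         [9.0, 2.5, 2.5, 9.0],
         [9.0, 2.5, 2.5, 9.0],
         [9.0, 9.0, 9.0, 9.0]]
d = target_distance(depth, (1.5, 1.5))  # centroid lies within the target blob
```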
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted, and a trendline is fitted using a power model to obtain an equation that maps bounding-box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
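Fitting a power model d = a·h^b reduces to linear regression in log-log space; the calibration numbers below are synthetic, not the project's measurements:

```python
import math

def fit_power(heights, distances):
    """Fit distance = a * height**b by linear least squares on log-log data."""
    lx = [math.log(h) for h in heights]
    ly = [math.log(d) for d in distances]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # slope of the log-log trendline is the exponent b
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)    # intercept recovers the coefficient a
    return a, b

# Synthetic calibration data following d = 100 / h (i.e. a = 100, b = -1):
heights = [100, 50, 25, 20, 10]           # bounding-box height in pixels
dists = [100 / h for h in heights]        # distance at each known position
a, b = fit_power(heights, dists)
predict = lambda h: a * h ** b            # pixels -> estimated distance
```

Because perspective projection makes apparent size roughly inversely proportional to distance, an exponent near -1 is what such a calibration would typically recover.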
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15819</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15819"/>
		<updated>2020-10-19T12:12:27Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First is mapping: the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search-and-delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target object is found, the manipulator uses its arm to pick it up and load it onto the carrier; it is the detection of a target that triggers this form of cooperation. As the task is carried out, a map of the environment is created from the robots’ sensor data. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The project is implemented by co-simulation: the control algorithms run in MATLAB while V-REP simulates the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path-planning method used in this project is the A* (A star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png|A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost incurred in moving from the current node to the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM uses maximum-likelihood estimation to predict the agent’s current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error at the beginning is largest. The velocity of the rover was analysed at various error levels. As the error values decreased, the fuzzy controller switched to a different amplifier value so that the system sped up.&lt;br /&gt;
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance of the target from the camera uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:TargetDetection.jpg|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.&lt;br /&gt;
File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted, and a trendline is fitted using a power model to obtain an equation that maps bounding-box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15817</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15817"/>
		<updated>2020-10-19T12:12:15Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First is mapping: the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search-and-delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target object is found, the manipulator uses its arm to pick it up and load it onto the carrier; it is the detection of a target that triggers this form of cooperation. As the task is carried out, a map of the environment is created from the robots’ sensor data. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The project is implemented by co-simulation: the control algorithms run in MATLAB while V-REP simulates the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path-planning method used in this project is the A* (A star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png|A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost incurred in moving from the current node to the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM uses maximum-likelihood estimation to predict the agent’s current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error at the beginning is largest. The velocity of the rover was analysed at various error levels. As the error values decreased, the fuzzy controller switched to a different amplifier value so that the system sped up.&lt;br /&gt;
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicting position.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance of the target from the camera uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:TargetDetection.jpg|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.&lt;br /&gt;
File:DistanceTarget.jpg|thumb|500px|center|A plot of the distance vs. Bounding Box height for detecting target objects.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted, and a trendline is fitted using a power model to obtain an equation that maps bounding-box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb. 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H., &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15815</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15815"/>
		<updated>2020-10-19T12:11:43Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to compare the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots’ sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method for this project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost from the current node to the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
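A minimal grid-based A* sketch of this idea (4-connected grid with a Manhattan-distance heuristic; an illustrative stand-in, not the project's MATLAB implementation):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.
    Each node is ranked by f = g + h: the cost accumulated so far plus a
    Manhattan-distance estimate of the remaining cost to the goal."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                              # already reached more cheaply
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                step = (nr, nc)
                heapq.heappush(open_set,
                               (g + 1 + h(step), g + 1, step, path + [step]))
    return None                                   # no collision-free path

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```

In the project, the obstacle grid would come from the map generated by the laser range sensors rather than being hard-coded.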
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM applies a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
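The predict/update cycle at the heart of EKF SLAM can be sketched for a simplified linear localisation case (constant Jacobians, direct position measurement from a known landmark; the noise values are illustrative, not the project's actual filter):

```python
import numpy as np

# State: agent position [x, y]; control u: odometry displacement;
# measurement z: observed agent position relative to a known landmark.
F = np.eye(2)            # state-transition Jacobian (constant here)
H = np.eye(2)            # measurement Jacobian (constant here)
Q = np.eye(2) * 0.01     # process (odometry) noise covariance, assumed
R = np.eye(2) * 0.1      # measurement noise covariance, assumed

def ekf_step(x, P, u, z):
    # Predict: propagate the state and its covariance through the motion model.
    x_pred = F @ x + u
    P_pred = F @ P @ F.T + Q
    # Update: weigh the prediction against the sensor via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, u=np.array([1.0, 0.0]), z=np.array([1.1, 0.05]))
print(np.round(x, 3))
```

Full EKF SLAM additionally stacks landmark positions into the state vector and linearises a nonlinear range-bearing measurement model at each step; this sketch shows only the filter structure.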
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|300px|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|300px|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
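For a three-omni-wheel platform of this kind, the usual inverse-kinematics mapping from a body velocity command (vx, vy, ω) to individual wheel speeds can be sketched as below (the wheel angles, signs and chassis radius are assumptions about the mounting convention, in the spirit of [9]):

```python
import numpy as np

# Assumed geometry: three omni wheels spaced 120 degrees around the chassis.
L = 0.15                                   # wheel-to-centre radius in metres (assumed)
angles = np.radians([90.0, 210.0, 330.0])  # wheel positions around the chassis

# Row i maps the body velocity (vx, vy, omega) to the drive speed of wheel i.
M = np.array([[-np.sin(a), np.cos(a), L] for a in angles])

def wheel_speeds(vx, vy, omega):
    """Inverse kinematics: body-frame velocity command to wheel speeds."""
    return M @ np.array([vx, vy, omega])

# Pure rotation spins all three wheels equally.
print(np.round(wheel_speeds(0.0, 0.0, 1.0), 3))
```

Because M is square and invertible for this geometry, forward kinematics (wheel speeds back to body velocity) is simply the inverse mapping.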
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start, since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. As the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
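The gain-switching behaviour described here can be caricatured as a simple error-dependent gain schedule (the thresholds and amplifier values are illustrative, and a real fuzzy controller blends rules through membership functions rather than hard thresholds):

```python
def amplifier(error):
    """Rule-based gain schedule in the spirit of the fuzzy controller:
    keep the gain low while the error is large (slow start, limited
    acceleration) and switch to larger amplifier values as the error
    shrinks, so the approach to the endpoint speeds up.
    Thresholds and gains are illustrative assumptions."""
    e = abs(error)
    if e > 1.0:      # far from the target: gentle gain
        return 0.5
    if e > 0.2:      # mid range: moderate gain
        return 1.0
    return 2.0       # close in: aggressive gain to finish quickly

def control(error):
    """Proportional action scaled by the scheduled amplifier."""
    return amplifier(error) * error

print(control(2.0), control(0.5))
```

The discrete switches of this schedule are what produce the visible dips in the velocity response discussed in the results section.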
&lt;br /&gt;
=== Camera ===&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high values of both Kp and Kd. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
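The Kp/Kd trade-off can be reproduced with a toy simulation of a PD-controlled unit point mass (illustrative gains and dynamics, not the rover's actual model):

```python
def simulate(kp, kd, steps=2000, dt=0.01, target=1.0):
    """Discrete-time PD position control of a unit point mass.
    Returns the final position and the peak position reached, so the
    overshoot for a given gain pair can be read off directly."""
    pos, vel, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = target - pos
        accel = kp * error - kd * vel   # PD law, derivative taken on velocity
        vel += accel * dt               # semi-implicit Euler integration
        pos += vel * dt
        peak = max(peak, pos)
    return pos, peak

# Balanced gains (critically damped): quick rise, essentially no overshoot.
print(simulate(kp=4.0, kd=4.0))
# Low derivative gain: under-damped response with a large overshoot.
print(simulate(kp=4.0, kd=0.5))
```

For a double integrator the damping ratio is kd / (2·sqrt(kp)), which is why kd = 4 with kp = 4 sits right at critical damping while kd = 0.5 overshoots heavily.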
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up with a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== Map Generating ===&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
=== Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM === &lt;br /&gt;
&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicting position.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance from the camera to the target uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.&lt;br /&gt;
File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and then measuring the size of its bounding box with the target at known distances from the camera. This data is plotted and a trendline is fitted using a power model, yielding an equation that relates bounding-box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb. 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H., &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15812</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15812"/>
		<updated>2020-10-19T11:56:34Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to compare the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots’ sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method for this project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost from the current node to the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM applies a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:EKF SLAM.jpg|300px|EKF predicting position&lt;br /&gt;
File:EKF TRUTH.png|300px|Ground truth position in the simulation&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start, since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. As the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high values of both Kp and Kd. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up with a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
One implemented method of measuring the distance from the camera to the target uses an RGB-D camera. Once a blob is detected, it is labelled with a centroid and a bounding box. The pixel location of the centroid is used as an index into the depth map, which returns a distance value.&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and then measuring the size of its bounding box with the target at known distances from the camera. This data is plotted and a trendline is fitted using a power model, yielding an equation that relates bounding-box height in pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb. 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H., &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15809</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15809"/>
		<updated>2020-10-19T11:51:06Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to compare the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent is in an environment of which it has little or no a priori knowledge. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots’ sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method for this project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
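As an illustrative sketch only (the project itself runs in MATLAB/V-REP, so this Python grid example and its step costs are assumptions, not the project's code), the ranking described above — cost so far plus a heuristic estimate to the goal — can be written for a simple occupancy grid as:

```python
import heapq
import itertools

def a_star(grid, start, goal):
    """A* search on a 4-connected occupancy grid.
    grid[r][c] == 1 marks an obstacle; start/goal are (row, col) tuples."""
    def h(cell):
        # Manhattan distance: an admissible heuristic on a 4-connected grid
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    tie = itertools.count()  # tie-breaker so the heap never compares cells
    frontier = [(h(start), next(tie), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, cell, parent = heapq.heappop(frontier)
        if cell in came_from:      # already expanded via a cheaper route
            continue
        came_from[cell] = parent
        if cell == goal:           # walk parent links back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1  # uniform step cost between adjacent cells
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cell))
    return None  # goal unreachable
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the first time the goal is popped the path is optimal.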
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum likelihood approach to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
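To make the predict/update cycle concrete, here is a deliberately simplified sketch in Python (not the project's MATLAB/V-REP implementation): an EKF for a planar rover observing a landmark at a known position with a range-bearing sensor. Full EKF SLAM additionally augments the state vector with the landmark positions; all noise values below are illustrative placeholders.

```python
import numpy as np

def ekf_predict(x, P, u, Q, dt):
    """Predict step for a planar rover state x = [px, py, theta],
    with control u = (v, omega) (forward speed, turn rate)."""
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * np.cos(th) * dt,
                       py + v * np.sin(th) * dt,
                       th + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Update step with a range-bearing measurement z = [range, bearing]
    of a landmark at a known position (lx, ly)."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    q = dx * dx + dy * dy
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    # Jacobian of the measurement model with respect to the state
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q),  0.0],
                  [ dy / q,          -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

The prediction propagates the pose through the motion model while inflating the covariance; each landmark observation then pulls the estimate back toward the measurement in proportion to the Kalman gain.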
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. As the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
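The error-dependent gain scheduling described above can be sketched as a fuzzy blend between a cautious gain for large errors and an aggressive gain for small errors. This Python sketch uses continuous interpolation between two membership functions, a common fuzzy formulation; the project itself switches between discrete amplifier values, and the thresholds and gains here are illustrative placeholders, not the project's tuned constants.

```python
def fuzzy_gain(error, e_small=0.2, e_large=1.0, k_slow=0.5, k_fast=2.0):
    """Blend between a cautious gain (large error) and an aggressive gain
    (small error) using two overlapping linear membership functions."""
    e = abs(error)
    if e >= e_large:
        return k_slow          # fully "error is large": restrict acceleration
    if e <= e_small:
        return k_fast          # fully "error is small": speed up the approach
    mu_large = (e - e_small) / (e_large - e_small)  # membership in "large"
    return mu_large * k_slow + (1.0 - mu_large) * k_fast
```

The returned gain would multiply the position error in each axis, so the rover starts slowly when far from its setpoint and accelerates its convergence as the error shrinks.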
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, rather than slowly approaching the endpoint, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating pixel height to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
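The power-model calibration described above amounts to a least-squares line in log-log space. A minimal Python sketch, using made-up calibration points rather than the project's measured data:

```python
import numpy as np

def fit_power_model(box_heights_px, distances_m):
    """Fit distance = a * height**b by linear least squares in log-log space."""
    slope, intercept = np.polyfit(np.log(box_heights_px), np.log(distances_m), 1)
    return np.exp(intercept), slope  # (a, b)

def estimate_distance(box_height_px, a, b):
    """Invert the calibration: map an observed bounding-box height to distance."""
    return a * box_height_px ** b

# Hypothetical calibration points where height halves as distance doubles (b = -1)
heights = [80.0, 40.0, 20.0, 10.0]
dists = [1.0, 2.0, 4.0, 8.0]
a, b = fit_power_model(heights, dists)
```

Once `a` and `b` are fitted from the calibration run, each detection's bounding-box height gives a depth estimate without a second camera or range sensor.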
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15808</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15808"/>
		<updated>2020-10-19T11:50:31Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to compare the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, the action of an agent using its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robot: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target object is found, the manipulator uses its arm to pick it up and load it onto the carrier; the detection of a target triggers this form of cooperation between the robots. As the task is carried out, a map of the environment is created from the robots' sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project is implemented by co-simulating the real-life environment using MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum likelihood approach to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. As the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, rather than slowly approaching the endpoint, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|center|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|center|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|center|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating pixel height to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15807</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15807"/>
		<updated>2020-10-19T11:48:38Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to compare the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, the action of an agent using its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robot: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target object is found, the manipulator uses its arm to pick it up and load it onto the carrier; the detection of a target triggers this form of cooperation between the robots. As the task is carried out, a map of the environment is created from the robots' sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project is implemented by co-simulating the real-life environment using MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum likelihood approach to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation]]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. As the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, rather than slowly approaching the endpoint, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating pixel height to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]][[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15805</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15805"/>
		<updated>2020-10-19T11:48:01Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent and multi-agent systems in terms of efficiency in completing identical search-and-delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target is detected, this triggers cooperation between the robots: the manipulator uses its arm to pick up the target object and load it onto the carrier. As the task is carried out, a map of the environment is created from the robots' sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP in co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already travelled to reach the next node and an estimated cost from that node to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
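As a minimal illustration of the cost structure described above (g: cost already travelled, h: estimated cost to go, f = g + h), a grid-based A* search can be sketched as follows. The grid, start and goal below are illustrative, not the project's actual map.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid. grid[r][c] == 1 marks an obstacle.
    Each frontier entry carries f = g + h: cost so far plus a
    Manhattan-distance estimate of the cost remaining."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in best_g and best_g[node] <= g:
            continue                             # already reached more cheaply
        best_g[node] = g
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(frontier,
                               (g + 1 + h((nr, nc)), g + 1,
                                (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

# Illustrative 3x3 map: row 1 is partly blocked, forcing a detour.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

With an admissible heuristic such as the Manhattan distance on a 4-connected grid, A* is guaranteed to return an optimal (shortest) path.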
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses maximum-likelihood data association to estimate the agent's current position.&lt;br /&gt;
&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|Ground truth position in the simulation]]&lt;br /&gt;
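As a hedged sketch of the EKF predict/update cycle underlying the method above, reduced to one dimension with a single known landmark (the landmark position and noise variances are illustrative, and a full EKF SLAM would also carry the landmark estimates in the state vector):

```python
# 1-D robot: state x = position; commanded velocity u; a known landmark at L
# is observed through a range measurement z = L - x (+ noise).
L = 10.0          # landmark position (illustrative)
Q, R = 0.1, 0.5   # process / measurement noise variances (illustrative)

def ekf_step(x, P, u, z, dt=1.0):
    # Predict with the motion model x' = x + u*dt (Jacobian F = 1).
    x_pred = x + u * dt
    P_pred = P + Q
    # Update with the measurement model h(x) = L - x (Jacobian H = -1).
    H = -1.0
    y = z - (L - x_pred)        # innovation
    S = H * P_pred * H + R      # innovation variance
    K = P_pred * H / S          # Kalman gain
    return x_pred + K * y, (1.0 - K * H) * P_pred

x, P = 0.0, 1.0                 # initial estimate and its variance
true_x = 0.0
for _ in range(5):              # robot drives toward the landmark
    true_x += 1.0
    x, P = ekf_step(x, P, u=1.0, z=L - true_x)   # noise-free measurement here
```

Each range measurement to a mapped feature shrinks the position covariance P, which is what lets the predicted position track the ground truth in the figures above.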
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed, and the velocity of the rover was examined at various error levels. The system is given a slow start, since the error is largest at the beginning; as the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
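The behaviour described above — restricting the response while the error is large and switching to higher amplifier (gain) values as the error shrinks — can be sketched as a simple error-based gain schedule. The thresholds and gain values here are illustrative, not the project's tuned values.

```python
def scheduled_gain(error):
    """Pick a velocity-command gain from the error magnitude:
    a small gain while the error is large (slow start), and a larger
    gain near the goal so the final approach does not crawl."""
    e = abs(error)
    if e > 2.0:        # far from the goal: restrict acceleration
        return 0.5
    elif e > 0.5:      # mid range
        return 1.0
    else:              # close to the goal: speed the response up
        return 2.0

def step(position, target, dt=0.1):
    error = target - position
    velocity = scheduled_gain(error) * error   # P control with scheduled gain
    return position + velocity * dt

# Drive a 1-D position from 0 toward an illustrative target of 5.
pos, history = 0.0, []
for _ in range(100):
    pos = step(pos, target=5.0)
    history.append(pos)
```

A full fuzzy controller would blend these gains smoothly via membership functions rather than switching at hard thresholds; the hard switches are what produce the dips visible in the test graph below.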
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
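The tuning trade-off described above can be reproduced on a toy unit-mass double-integrator plant; the gain values below are illustrative, not the project's tuned gains.

```python
def simulate_pd(kp, kd, setpoint=1.0, dt=0.01, steps=1000):
    """PD position control of a unit-mass double integrator:
    commanded acceleration a = Kp*e + Kd*de/dt."""
    x, v = 0.0, 0.0
    prev_e = setpoint - x
    trace = []
    for _ in range(steps):
        e = setpoint - x
        a = kp * e + kd * (e - prev_e) / dt   # PD control action
        prev_e = e
        v += a * dt                           # semi-implicit Euler integration
        x += v * dt
        trace.append(x)
    return trace

slow = simulate_pd(kp=2.0, kd=4.0)         # overdamped: slow rise, no overshoot
balanced = simulate_pd(kp=20.0, kd=8.0)    # quick rise, negligible overshoot
aggressive = simulate_pd(kp=80.0, kd=2.0)  # high Kp, low Kd: large overshoot
```

Plotting the three traces reproduces the qualitative shapes in the figure: the under-damped gain set overshoots, the over-damped set rises slowly, and the balanced set rises quickly and settles at the setpoint.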
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up with higher amplifier values of the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating pixel size to distance. The R values are very close to 1, indicating that the line is an appropriate fit.&lt;br /&gt;
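The calibration step described above — fitting a power model to (distance, bounding-box height) pairs and inverting it to estimate depth — can be sketched as follows. The sample measurements are made up for illustration; a real calibration would use data collected from the simulator.

```python
import math

# Illustrative calibration pairs: (distance in m, bounding-box height in px).
# For a pinhole camera the height follows h ~ a * distance**b with b near -1.
samples = [(0.5, 200.0), (1.0, 100.0), (2.0, 50.0), (4.0, 25.0)]

# Fit log(h) = log(a) + b*log(d) by ordinary least squares.
xs = [math.log(d) for d, _ in samples]
ys = [math.log(h) for _, h in samples]
n = len(samples)
mx, my = sum(xs) / n, sum(ys) / n
b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = math.exp(my - b * mx)

def estimate_distance(height_px):
    """Invert h = a * d**b to recover distance from an observed box height."""
    return (height_px / a) ** (1.0 / b)
```

Taking logarithms turns the power model into a straight line, so an ordinary least-squares fit (and its R value) can be computed directly on the log-transformed data.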
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb. 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H., &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid and J. J. Leonard, &amp;quot;Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,&amp;quot; IEEE Transactions on Robotics, vol. 32, no. 6, 2016. Available: 10.1109/TRO.2016.2624754.&lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15803</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15803"/>
		<updated>2020-10-19T11:47:22Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent and multi-agent systems in terms of efficiency in completing identical search-and-delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target is detected, this triggers cooperation between the robots: the manipulator uses its arm to pick up the target object and load it onto the carrier. As the task is carried out, a map of the environment is created from the robots' sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP in co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already travelled to reach the next node and an estimated cost from that node to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses maximum-likelihood data association to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|300px|right|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|300px|right|Ground truth position in the simulation]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed, and the velocity of the rover was examined at various error levels. The system is given a slow start, since the error is largest at the beginning; as the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up with higher amplifier values of the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating pixel size to distance. The R values are very close to 1, indicating that the line is an appropriate fit.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
RoverDetected.jpg|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.&lt;br /&gt;
DistanceRover.jpg|A plot of the distance vs. Bounding Box height for rover detection.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb. 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H., &amp;quot;Comparison – Centralized, Decentralized and Distributed Systems,&amp;quot; GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] C. Cadena, L. Carlone, H. Carrillo, Y. Latif, D. Scaramuzza, J. Neira, I. Reid and J. J. Leonard, &amp;quot;Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age,&amp;quot; IEEE Transactions on Robotics, vol. 32, no. 6, 2016. Available: 10.1109/TRO.2016.2624754.&lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15801</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15801"/>
		<updated>2020-10-19T11:45:35Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent and multi-agent systems in terms of efficiency in completing identical search-and-delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). When a target is detected, this triggers cooperation between the robots: the manipulator uses its arm to pick up the target object and load it onto the carrier. As the task is carried out, a map of the environment is created from the robots' sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP in co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already travelled to reach the next node and an estimated cost from that node to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
An Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses maximum-likelihood data association to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|100px|left|EKF predicted position]] &lt;br /&gt;
[[File:EKF TRUTH.png|thumb|100px|right|Ground truth position in the simulation]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed, and the velocity of the rover was examined at various error levels. The system is given a slow start, since the error is largest at the beginning; as the error values decrease, the fuzzy controller switches to a different amplifier value so that the system speeds up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up with higher amplifier values of the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating bounding-box pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
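A power model h = a·d^b can be fitted by ordinary least squares in log-log space, since log h = log a + b·log d is linear. The sketch below is an illustrative Python version of this calibration with synthetic pinhole-camera data, not the project's actual measurements.

```python
# Sketch of the single-camera depth calibration described above:
# fit h = a * d**b by linear regression on (log d, log h), then
# invert the model to estimate depth from a bounding-box height.
import math

def fit_power(distances, heights):
    """Least-squares fit of h = a * d**b via regression on logs."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(h) for h in heights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b

def depth_from_height(h, a, b):
    """Invert the fitted model: d = (h / a) ** (1 / b)."""
    return (h / a) ** (1.0 / b)
```

For an ideal pinhole camera the exponent b comes out near -1 (apparent size inversely proportional to distance), so a fitted b close to -1 is a useful sanity check on the calibration data.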
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|center|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15799</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15799"/>
		<updated>2020-10-19T11:44:50Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping: the agent uses its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping (SLAM), and it is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier; the detection of a target triggers this form of cooperation between the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The project is implemented by co-simulating a real-life environment using MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path-planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |A* Search Algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost to reach the next node and the estimated total cost to the final destination.&lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
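The cost combination above is the classic f = g + h expansion order: g is the cost already paid to reach a node, h an admissible estimate of the remaining cost. The project's planner is written in MATLAB; the sketch below is only a minimal Python illustration on a 0/1 occupancy grid with a Manhattan-distance heuristic.

```python
# Minimal A* sketch on a 4-connected occupancy grid (1 = obstacle).
# Nodes are expanded in order of f = g + h, where g is the path cost
# so far and h is the (admissible) Manhattan distance to the goal.
import heapq

def astar(grid, start, goal):
    """Return a shortest path as a list of (row, col), or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # goal unreachable
```

Because the heuristic never overestimates, the first time the goal is popped from the queue the returned path is optimal, which is why A* suits the obstacle maps produced by SLAM.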
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent.&lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM uses a maximum-likelihood method to predict the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|100px|thumb|left|EKF predicted position]]&lt;br /&gt;
[[File:EKF TRUTH.png|100px|thumb|right|Ground truth position in the simulation]]&lt;br /&gt;
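The predict/update cycle at the heart of the EKF can be illustrated with a one-dimensional toy: predict the position from the motion command, then correct the prediction with a sensor measurement, weighting each by its uncertainty. This is a deliberately reduced sketch, not the project's full multi-landmark EKF SLAM.

```python
# One-dimensional Kalman predict/update sketch illustrating the EKF
# idea used for localisation (a toy, not the project's full EKF SLAM).
def kf_step(x, p, u, z, q, r):
    """x: state estimate, p: variance, u: motion, z: measurement,
    q: process noise variance, r: measurement noise variance."""
    # Predict: apply the motion model; uncertainty grows by q.
    x_pred, p_pred = x + u, p + q
    # Update: the Kalman gain trades prediction against measurement.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new
```

The corrected estimate always lands between the prediction and the measurement, and its variance shrinks after each update, which is the behaviour visible when comparing the EKF-predicted track against the ground-truth track in the figures above.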
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the rover's response, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error is largest at the beginning, and the velocity of the rover was analysed at various error levels. As the error decreased, the fuzzy controller switched to a different amplifier value so that the system sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a quick rise time without overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error decreased, rather than approaching the endpoint slowly, the response was sped up by a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating bounding-box pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15798</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15798"/>
		<updated>2020-10-19T11:44:40Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping: the agent uses its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping (SLAM), and it is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier; the detection of a target triggers this form of cooperation between the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The project is implemented by co-simulating a real-life environment using MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path-planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |A* Search Algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost to reach the next node and the estimated total cost to the final destination.&lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to predict the current position of the agent.&lt;br /&gt;
Based on the environment features currently detected by the sensor, EKF SLAM uses a maximum-likelihood method to predict the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|100px|thumb|left|EKF predicted position]]&lt;br /&gt;
[[File:EKF TRUTH.png|100px|thumb|right|Ground truth position in the simulation]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the rover's response, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error is largest at the beginning, and the velocity of the rover was analysed at various error levels. As the error decreased, the fuzzy controller switched to a different amplifier value so that the system sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a quick rise time without overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error decreased, rather than approaching the endpoint slowly, the response was sped up by a higher amplifier value on the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation relating bounding-box pixels to distance. The R values are very close to 1, indicating the fit is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15796</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15796"/>
		<updated>2020-10-19T11:44:24Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping: the agent uses its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping (SLAM), and it is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
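As an illustration of the cost function described above, a minimal grid-based A* search can be sketched as follows; the grid, start and goal here are hypothetical examples, not the project's environment:&lt;br /&gt;

```python
import heapq

def astar(grid, start, goal):
    """A* search on a 4-connected grid; 0 = free cell, 1 = obstacle.

    Each node is ranked by f(n) = g(n) + h(n): g is the cost already
    incurred from the start, h is the Manhattan-distance estimate of the
    remaining cost to the destination.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path so far)
    visited = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in visited:
                    heapq.heappush(open_set,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists

# Hypothetical 3x3 map: the middle row is blocked except for the right cell.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```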
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|EKF]]&lt;br /&gt;
[[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
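To illustrate the predict/update cycle that underlies EKF SLAM, the following sketch localises an agent against a single known landmark. The full EKF SLAM filter also carries the landmark positions in the state vector; the landmark location and noise covariances here are illustrative assumptions only:&lt;br /&gt;

```python
import numpy as np

# Minimal EKF cycle: the state is the agent's 2D position, the motion input
# is a commanded displacement, and the measurement is the position of one
# known landmark relative to the agent. All numbers are illustrative.
landmark = np.array([5.0, 3.0])
Q = np.eye(2) * 0.10   # motion (process) noise covariance
R = np.eye(2) * 0.05   # measurement noise covariance

x = np.array([0.0, 0.0])   # initial state estimate (agent position)
P = np.eye(2)              # initial state covariance (uncertainty)

def ekf_step(x, P, u, z):
    # Predict: apply the motion command; uncertainty grows by Q.
    x_pred = x + u                 # F = I for this linear motion model
    P_pred = P + Q
    # Update: the measurement model is z = landmark - x, so H = -I.
    H = -np.eye(2)
    y = z - (landmark - x_pred)            # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# One step: command a move of (1, 0) and observe the landmark from the
# true pose (1, 0) (a noise-free measurement, for clarity).
true_pose = np.array([1.0, 0.0])
z = landmark - true_pose
x, P = ekf_step(x, P, np.array([1.0, 0.0]), z)
```

After the update the position uncertainty shrinks well below its initial value, which is the mechanism that lets the agent stay localised while mapping.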
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. When the error values decreased, the fuzzy controller switches to a different amplifier value so that the system is sped up.&lt;br /&gt;
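The switching behaviour described above can be sketched as a simple error-based gain schedule. The thresholds and amplifier values below are illustrative placeholders, not the tuned values used in the project:&lt;br /&gt;

```python
def scheduled_gain(error, base_gain=1.0):
    """Pick a gain multiplier from the error magnitude.

    Large error  -> small multiplier (restrict acceleration, slow start);
    small error  -> large multiplier (speed up the final approach).
    Thresholds and multipliers are illustrative placeholders.
    """
    e = abs(error)
    if e > 1.0:
        amplifier = 0.5    # large error: damp the response
    elif e > 0.2:
        amplifier = 1.0    # medium error: nominal gain
    else:
        amplifier = 2.0    # small error: speed up the approach
    return base_gain * amplifier

def control_velocity(error):
    # Proportional velocity command with the scheduled gain applied.
    return scheduled_gain(error) * error
```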
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time without causing overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
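The tuning trade-off can be reproduced on a generic unit-mass (double-integrator) plant; this is an illustrative model, not the rover's dynamics, and the gains are hypothetical:&lt;br /&gt;

```python
def simulate_pd(kp, kd, steps=400, dt=0.05):
    """Simulate a unit-mass double integrator under PD position control.

    Returns the position trace for a unit step in the setpoint; used here
    only to illustrate the overshoot-vs-rise-time trade-off.
    """
    pos, vel, target = 0.0, 0.0, 1.0
    trace = []
    for _ in range(steps):
        error = target - pos
        accel = kp * error - kd * vel   # PD law (derivative on measurement)
        vel += accel * dt
        pos += vel * dt
        trace.append(pos)
    return trace

underdamped = simulate_pd(kp=4.0, kd=0.5)   # low Kd: large overshoot
balanced = simulate_pd(kp=4.0, kd=4.0)      # kd = 2*sqrt(kp): no overshoot
```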
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by applying a higher amplifier value to the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object, and then measuring the size of the bounding box when the target is at known distances from the camera. Then, this data is plotted and a trendline is fitted using a Power model to obtain an equation which describes pixels vs. distance. The R values are very close to 1, indicating the line is an appropriate fit.&lt;br /&gt;
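This calibration can be sketched by fitting the power model h = a*d**b with least squares in log-log space; the calibration numbers below are synthetic for illustration, not the project's measured data:&lt;br /&gt;

```python
import math

# Synthetic calibration data: bounding-box height (pixels) measured with the
# target at known distances (metres). These follow h = 200 * d**-1 exactly,
# purely for illustration; real calibration would use measured values.
distances = [0.5, 1.0, 1.5, 2.0, 3.0]
heights = [200.0 / d for d in distances]

# Fit h = a * d**b by linear least squares on log h = log a + b * log d.
n = len(distances)
lx = [math.log(d) for d in distances]
ly = [math.log(h) for h in heights]
mx, my = sum(lx) / n, sum(ly) / n
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(lx, ly))
     / sum((xi - mx) ** 2 for xi in lx))
a = math.exp(my - b * mx)

def estimate_distance(pixel_height):
    # Invert h = a * d**b to recover distance from a measured box height.
    return (pixel_height / a) ** (1.0 / b)
```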
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:Astar.jpg|thumb|center|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15795</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15795"/>
		<updated>2020-10-19T11:44:09Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|EKF]]&lt;br /&gt;
[[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. When the error values decreased, the fuzzy controller switches to a different amplifier value so that the system is sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time without causing overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by applying a higher amplifier value to the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object, and then measuring the size of the bounding box when the target is at known distances from the camera. Then, this data is plotted and a trendline is fitted using a Power model to obtain an equation which describes pixels vs. distance. The R values are very close to 1, indicating the line is an appropriate fit.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15794</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15794"/>
		<updated>2020-10-19T11:43:58Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is carried out, a map of the environment is created using data from the robots' sensors. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for the agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|EKF]]&lt;br /&gt;
[[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three wheel drive omnidirectional rover platform with three degrees of freedom (3DOF). The four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start since the error at the beginning is the largest. The velocity of the rover was analysed at various error levels. When the error values decreased, the fuzzy controller switches to a different amplifier value so that the system is sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time without causing overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of a slow approach to the endpoint, the response was sped up by applying a higher amplifier value to the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object, and then measuring the size of the bounding box when the target is at known distances from the camera. Then, this data is plotted and a trendline is fitted using a Power model to obtain an equation which describes pixels vs. distance. The R values are very close to 1, indicating the line is an appropriate fit.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15793</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15793"/>
		<updated>2020-10-19T11:43:24Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method used in this project is co-simulation of the real-life environment in MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost from there to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
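As a rough sketch of the idea (not the project's MATLAB implementation; the set-of-free-cells grid, unit step cost and Manhattan heuristic are assumptions for illustration):&lt;br /&gt;

```python
import heapq
from operator import lt  # lt(a, b) tests whether a is strictly smaller than b

def a_star(free_cells, start, goal):
    """A* over a 4-connected grid given as a set of traversable (x, y) cells."""
    def h(p):
        # Manhattan-distance heuristic: admissible for 4-connected motion
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt not in free_cells:
                continue  # obstacle cell or outside the mapped area
            g2 = g + 1
            if lt(g2, best_g.get(nxt, float("inf"))):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None  # goal unreachable
```

Because the Manhattan heuristic never overestimates the remaining cost, the first time the goal is popped the returned path is optimal.&lt;br /&gt;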
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb|EKF]]&lt;br /&gt;
[[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
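For intuition only, the predict/update cycle at the core of EKF SLAM can be reduced to a scalar (1-D) Kalman filter; the state, noise values and direct position measurement below are illustrative assumptions, not the project's full EKF SLAM with landmark states:&lt;br /&gt;

```python
def kalman_step(x, P, u, z, Q, R):
    """One predict/update cycle of a scalar Kalman filter.
    x, P : position estimate and its variance
    u    : commanded motion this step
    z    : position measurement from the sensor
    Q, R : motion-noise and measurement-noise variances"""
    # Predict: propagate the state with the motion command
    x_pred = x + u
    P_pred = P + Q
    # Update: correct the prediction with the measurement
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_pred + K * (z - x_pred)  # innovation weighted by the gain
    P_new = (1 - K) * P_pred           # uncertainty shrinks after the update
    return x_new, P_new
```

EKF SLAM applies this same cycle to a joint state of the robot pose plus landmark positions, linearising the motion and observation models at each step.&lt;br /&gt;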
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error is largest at the beginning. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value to speed the system up.&lt;br /&gt;
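The error-dependent amplifier switching described above can be sketched as a crisp two-rule simplification (the threshold and gain values are placeholders, not the project's tuned constants):&lt;br /&gt;

```python
from operator import lt  # lt(a, b) tests whether a is strictly smaller than b

def select_amplifier(error, threshold=0.5, slow_gain=0.4, fast_gain=1.5):
    """Choose the controller amplifier from the current error magnitude:
    a small gain while the error is large (slow start, restricted
    acceleration), a larger gain once the error has shrunk (sped-up
    approach to the endpoint)."""
    if lt(abs(error), threshold):
        return fast_gain
    return slow_gain
```

A full fuzzy controller would blend such rules smoothly via membership functions rather than switching at a hard threshold.&lt;br /&gt;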
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
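The Kp/Kd trade-off above can be reproduced with a simple point-mass simulation (unit mass, forward-Euler integration; the gains and step sizes here are hypothetical, not the rover's):&lt;br /&gt;

```python
def simulate_pd(kp, kd, steps=200, dt=0.05):
    """Drive a unit point mass from 0 to a target of 1 with a PD law
    u = kp*e + kd*de/dt, returning the position history."""
    pos, vel, target = 0.0, 0.0, 1.0
    prev_e = target - pos
    history = []
    for _ in range(steps):
        e = target - pos
        u = kp * e + kd * (e - prev_e) / dt  # proportional + derivative terms
        prev_e = e
        vel += u * dt   # unit mass: the control input is the acceleration
        pos += vel * dt
        history.append(pos)
    return history
```

Raising kp alone shortens the rise time but adds overshoot; adding kd damps the response, which is the balance described above.&lt;br /&gt;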
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of approaching the endpoint slowly, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model, yielding an equation that relates pixel height to distance. The R values are very close to 1, indicating that the fitted line is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|center|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
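The power-model trendline can be reproduced with ordinary least squares in log-log space (the function names and sample numbers below are illustrative, not the project's calibration data):&lt;br /&gt;

```python
import math

def fit_power_model(distances, heights):
    """Fit heights = a * distances**b by linear least squares on
    log(height) = log(a) + b*log(distance); returns (a, b)."""
    xs = [math.log(d) for d in distances]
    ys = [math.log(h) for h in heights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def estimate_distance(a, b, height):
    """Invert height = a * d**b to recover distance from a pixel height."""
    return (height / a) ** (1.0 / b)
```

At run time the rover measures the bounding-box height in pixels and applies the inverted model to estimate how far away the target is.&lt;br /&gt;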
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15791</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15791"/>
		<updated>2020-10-19T11:42:45Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method used in this project is co-simulation of the real-life environment in MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost from there to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb]] [[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error is largest at the beginning. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value to speed the system up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of approaching the endpoint slowly, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model, yielding an equation that relates pixel height to distance. The R values are very close to 1, indicating that the fitted line is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. Bounding Box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. Bounding Box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI,&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15790</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15790"/>
		<updated>2020-10-19T11:42:19Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms &lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method used in this project is co-simulation of the real-life environment in MATLAB and V-REP.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and an estimate of the total cost from there to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
The Extended Kalman Filter (EKF) SLAM method is used in this project to estimate the current position of the agent. &lt;br /&gt;
Based on the environment features currently detected by the sensors, EKF SLAM uses a maximum-likelihood method to estimate the agent's current position.&lt;br /&gt;
[[File:EKF SLAM.jpg|thumb]] [[File:EKF TRUTH.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system was given a slow start, since the error is largest at the beginning. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value to speed the system up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, instead of approaching the endpoint slowly, the response was sped up with a higher amplifier value for the controller constants.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation that describes pixels vs. distance. The R values are very close to 1, indicating the fitted line is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|center|A plot of the distance vs. bounding box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|center|A plot of the distance vs. bounding box height for rover detection.]]&lt;br /&gt;
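The calibration above can be sketched as a power-law fit, d = a·h^b, obtained by ordinary least squares in log space; the fitted curve then turns a measured bounding-box height into a distance estimate. The calibration points used in the usage below are made-up placeholders, not the project's measured data.&lt;br /&gt;

```python
import math

def fit_power(heights, distances):
    """Fit d = a * h^b by linear least squares in log space:
    log d = log a + b * log h."""
    xs = [math.log(h) for h in heights]
    ys = [math.log(d) for d in distances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the exponent b.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx
    return math.exp(log_a), b

def estimate_distance(height_px, a, b):
    """Invert the calibration: bounding-box height in pixels -> distance."""
    return a * height_px ** b
```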
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15786</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15786"/>
		<updated>2020-10-19T11:41:11Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers this form of cooperation by the robots. As the task is carried out, a map of the environment is created using the data from the robots' sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png|A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost from the start to the next node and an estimate of the remaining cost to the final destination.&lt;br /&gt;
It provides an optimal path for an agent from its current location to the destination.&lt;br /&gt;
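The cost logic described above (cost so far plus an estimate of the remaining cost, f(n) = g(n) + h(n)) can be sketched as a grid-based A* search. This is an illustrative Python sketch, not the project's MATLAB/V-REP implementation; the grid, start and goal in the usage below are hypothetical.&lt;br /&gt;

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid. grid[r][c] == 1 marks an obstacle.
    f(n) = g(n) + h(n): g is the cost from the start, h is a Manhattan-distance
    heuristic estimating the remaining cost to the goal."""
    rows, cols = len(grid), len(grid[0])

    def h(node):
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start)]  # (f, g, node)
    came_from = {}
    g = {start: 0}
    while open_heap:
        _, cost, node = heapq.heappop(open_heap)
        if node == goal:
            # Walk the parent links back to the start.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = cost + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (new_g + h(nxt), new_g, nxt))
    return None  # no path exists
```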
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start, since the error is largest at the beginning. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value so that the system was sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, the response was sped up with a higher amplifier value on the controller constants instead of a slow approach to the endpoint.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation that describes pixels vs. distance. The R values are very close to 1, indicating the fitted line is appropriate.&lt;br /&gt;
&lt;br /&gt;
[[File:DistanceTarget.jpg|thumb|A plot of the distance vs. bounding box height for detecting target objects.]]&lt;br /&gt;
[[File:DistanceRover.jpg|thumb|A plot of the distance vs. bounding box height for rover detection.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15785</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15785"/>
		<updated>2020-10-19T11:40:49Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers this form of cooperation by the robots. As the task is carried out, a map of the environment is created using the data from the robots' sensors: a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png|A* search algorithm (GeeksforGeeks) [11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost from the start to the next node and an estimate of the remaining cost to the final destination.&lt;br /&gt;
It provides an optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and the angular direction were analysed. The system was given a slow start, since the error is largest at the beginning. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value so that the system was sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values, while the response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== Testing of Fuzzy Logic Controller ===&lt;br /&gt;
The dips in the following graph are caused by the fuzzy logic switching to different amplifier values: as the error values decreased, the response was sped up with a higher amplifier value on the controller constants instead of a slow approach to the endpoint.&lt;br /&gt;
&lt;br /&gt;
[[File:Fuzzy Logic Test.jpg|500px|thumb|center|Fuzzy Logic Test]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
&lt;br /&gt;
Depth estimation using only one camera is achieved by identifying the target object and measuring the size of its bounding box when the target is at known distances from the camera. This data is then plotted and a trendline is fitted using a power model to obtain an equation that describes pixels vs. distance. The R values are very close to 1, indicating the fitted line is appropriate.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
DistanceTarget.jpg|A plot of the distance vs. bounding box height for detecting target objects.&lt;br /&gt;
DistanceRover.jpg|A plot of the distance vs. bounding box height for rover detection.&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:DistanceRover.jpg&amp;diff=15784</id>
		<title>File:DistanceRover.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:DistanceRover.jpg&amp;diff=15784"/>
		<updated>2020-10-19T11:40:44Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Pixels vs Distance for rover detection&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:DistanceTarget.jpg&amp;diff=15782</id>
		<title>File:DistanceTarget.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:DistanceTarget.jpg&amp;diff=15782"/>
		<updated>2020-10-19T11:39:46Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Distance vs Pixels for target detection.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15777</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15777"/>
		<updated>2020-10-19T11:32:53Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Target Detection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping, or SLAM, and is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent and multi-agent systems in terms of efficiency when completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robot: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top). When a target is found, the manipulator uses its arm to pick it up and load it onto the carrier; the detection of a target is what triggers this form of cooperation. As the task is carried out, a map of the environment is created from the robots’ sensor data. The sensors used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP to co-simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is A* (A star).&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost to reach the next node and an estimate of the total cost to the final destination. &lt;br /&gt;
It provides an optimal path for the agent from its current location to the destination.&lt;br /&gt;
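As an illustration of the search itself (a minimal Python sketch on a hypothetical 4-connected grid, not the project's MATLAB implementation), A* keeps a priority queue ordered by g + h, where g is the cost travelled so far and h is an admissible heuristic such as the Manhattan distance:&lt;br /&gt;

```python
import heapq

def astar(blocked, start, goal):
    """A* on a 4-connected grid; blocked is a set of impassable (x, y) cells."""
    def h(p):
        # Manhattan distance: admissible for unit-cost 4-connected moves.
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # entries are (f, g, node, path)
    visited = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path  # first goal pop is optimal when h is admissible
        if node in visited:
            continue
        visited.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if nxt in blocked or nxt in visited:
                continue
            heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # no path exists
```

In the project, the blocked set would come from the obstacle map built during SLAM.&lt;br /&gt;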
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the response of the rover, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. The system is given a slow start, since the error at the beginning is largest. The velocity of the rover was analysed at various error levels; as the error decreases, the fuzzy controller switches to a different gain value so that the system speeds up.&lt;br /&gt;
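The switching behaviour can be sketched as follows (an illustrative Python sketch, not the project's MATLAB controller; the error thresholds and gain values are hypothetical):&lt;br /&gt;

```python
def scheduled_gain(error):
    """Pick a proportional gain from the error magnitude (fuzzy-style switching)."""
    e = abs(error)
    if e >= 1.0:
        return 0.5   # large error: low gain restricts acceleration (slow start)
    if e >= 0.2:
        return 1.5   # medium error: moderate gain
    return 3.0       # small error: high gain speeds up the final approach

def step(position, target, dt=0.05):
    """One control update: the velocity command is the scheduled gain times the error."""
    error = target - position
    return position + scheduled_gain(error) * error * dt
```

In the real controller the hard thresholds would be replaced by fuzzy membership functions that blend the gains smoothly instead of switching abruptly.&lt;br /&gt;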
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
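The trade-off described above can be reproduced with a simple simulation (an illustrative Python sketch of a point-mass plant, not the project's MATLAB model; the gain values are hypothetical):&lt;br /&gt;

```python
def simulate(kp, kd, steps=4000, dt=0.005):
    """Unit-step position response of a point mass under PD control.

    Returns (final_position, peak_position); a peak above 1.0 means overshoot.
    """
    pos, vel, peak = 0.0, 0.0, 0.0
    for _ in range(steps):
        error = 1.0 - pos
        accel = kp * error - kd * vel  # PD law: the derivative term damps velocity
        vel += accel * dt
        pos += vel * dt
        peak = max(peak, pos)
    return pos, peak
```

With kp = 100, a very low kd such as 2 overshoots heavily before settling, while a balanced kd around 20 (critical damping, kd = 2*sqrt(kp) for this plant) reaches the reference quickly without overshoot.&lt;br /&gt;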
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their locations in coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This map was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
[[File:RoverDetected.jpg|thumb|Rovers use the camera to detect the location of other rovers in the environment. They are coloured green, which is the feature that is extracted to identify the rover.]]&lt;br /&gt;
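Colour-based feature extraction of this kind can be sketched as follows (an illustrative Python sketch, not the project's MATLAB vision code; the dominance margin of 60 is a hypothetical threshold):&lt;br /&gt;

```python
def find_blob(image, channel):
    """Locate a coloured blob: channel 0 finds red targets, channel 1 green rovers.

    image is a nested list of (r, g, b) tuples with values in 0..255.
    Returns the blob centroid in pixel coordinates, or None if nothing matches.
    """
    hits = []
    for y, row in enumerate(image):
        for x, pixel in enumerate(row):
            others = [v for i, v in enumerate(pixel) if i != channel]
            # A pixel matches when the chosen channel dominates the others by a margin.
            if pixel[channel] > max(others) + 60:
                hits.append((x, y))
    if not hits:
        return None
    cx = sum(x for x, _ in hits) / len(hits)
    cy = sum(y for _, y in hits) / len(hits)
    return (cx, cy)
```

The centroid in pixel coordinates would then be converted to metres using the bounding-box size, as described in the caption above.&lt;br /&gt;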
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RoverDetected.jpg&amp;diff=15776</id>
		<title>File:RoverDetected.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RoverDetected.jpg&amp;diff=15776"/>
		<updated>2020-10-19T11:31:33Z</updated>

		<summary type="html">&lt;p&gt;A1669943: Rovers can detect other rovers by identifying their green color.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Rovers can detect other rovers by identifying their green color.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15773</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15773"/>
		<updated>2020-10-19T11:29:39Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First is mapping: an agent uses its perceptions to build a model of the environment it is in. Localisation is where the agent uses a priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. Doing both at the same time is one of the most fundamental techniques in autonomous systems: simultaneous localisation and mapping, or SLAM, which is needed when an agent has little or no a priori knowledge of its environment. This project also uses computer vision to detect target objects and other rovers, which informs path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A star) algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
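The project's planner is implemented in MATLAB, which is not reproduced here; as a rough illustration of the idea only, a minimal grid-based A* with a Manhattan-distance heuristic might look like this in Python (the grid, unit step costs and `a_star` helper are hypothetical, not the project's code):

```python
import heapq

def a_star(grid, start, goal):
    """A* search on a 4-connected grid; grid[r][c] == 1 marks an obstacle.

    Each node n is scored f(n) = g(n) + h(n): g is the cost already
    travelled from the start, h is the Manhattan-distance estimate of
    the remaining cost to the goal.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path so far)
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (node[0] + dr, node[1] + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and not grid[nbr[0]][nbr[1]]:
                ng = g + 1                        # unit cost per move
                if ng < best_g.get(nbr, float("inf")):
                    best_g[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), ng, nbr, path + [nbr]))
    return None  # no path exists

# Toy map: a wall forces a detour around the right-hand side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates the remaining cost on a 4-connected grid, the first time the goal is popped the path is optimal.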
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response from the system.&lt;br /&gt;
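The exact wheel layout of the rover is not given above; assuming the common arrangement of three omni wheels spaced 120° apart at radius R from the platform centre, the mapping from a desired body velocity (vx, vy, ω) to individual wheel speeds can be sketched as follows (illustrative Python, not the project's MATLAB code, and R is a placeholder value):

```python
import math

def wheel_speeds(vx, vy, omega, R=0.1):
    """Tangential wheel speeds for three omni wheels 120 degrees apart.

    A wheel mounted at angle theta_i rolls along its tangent direction,
    so its required speed is:
        v_i = -sin(theta_i)*vx + cos(theta_i)*vy + R*omega
    """
    return [-math.sin(th) * vx + math.cos(th) * vy + R * omega
            for th in (0.0, 2 * math.pi / 3, 4 * math.pi / 3)]
```

For pure rotation all three wheels run at the same speed R·ω, while for pure translation the three speeds sum to zero, which is what makes this geometry holonomic.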
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the rover's response, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. Since the error is largest at the beginning, the system was given a slow start. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value so that the system was sped up.&lt;br /&gt;
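The membership functions and rule base of the project's fuzzy controller are not detailed above; one way to sketch the described behaviour (a cautious gain at large error blending into a faster gain at small error) is a simple two-set interpolation. The gains `k_slow`, `k_fast` and the error threshold `e_big` below are made-up placeholder values:

```python
def scheduled_gain(error, k_slow=0.5, k_fast=2.0, e_big=1.0):
    """Fuzzy-style gain scheduling between two gains.

    mu in [0, 1] is the membership of |error| in the 'large error' set;
    large errors get the cautious gain (slow start, limited acceleration),
    small errors get the higher gain (faster response near the target).
    """
    mu = min(abs(error) / e_big, 1.0)
    return mu * k_slow + (1.0 - mu) * k_fast

def control(error, **kw):
    """Proportional control action with the scheduled gain."""
    return scheduled_gain(error, **kw) * error
```

A full fuzzy controller would use several overlapping membership sets and a rule table; this two-set blend is only the smallest example of the same idea.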
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
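The tuning behaviour described above can be illustrated on a unit-mass double integrator, which stands in for (and is simpler than) the rover's actual plant model; the gains below are illustrative only. A balanced pair settles on the target without overshoot, while an underdamped pair (high Kp relative to Kd) overshoots before settling:

```python
def simulate_pd(kp, kd, target=1.0, dt=0.01, steps=2000):
    """Discrete PD position control of a unit-mass double integrator.

    Commanded acceleration: a = kp*error + kd*d(error)/dt, integrated
    with semi-implicit Euler. Returns the final position and the peak
    position reached (peak > target indicates overshoot).
    """
    x, v = 0.0, 0.0
    prev_err = target - x
    peak = x
    for _ in range(steps):
        err = target - x
        a = kp * err + kd * (err - prev_err) / dt  # backward-difference derivative
        prev_err = err
        v += a * dt
        x += v * dt
        peak = max(peak, x)
    return x, peak

# Critically damped choice for this plant: kd = 2*sqrt(kp).
x_bal, peak_bal = simulate_pd(kp=4.0, kd=4.0)
# Underdamped: damping ratio kd/(2*sqrt(kp)) = 0.1, so large overshoot.
x_osc, peak_osc = simulate_pd(kp=100.0, kd=2.0)
```

The comparison mirrors the tuning plot: both gain sets reach the target, but only the balanced set gets there without ringing.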
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
[[File:RoverDetection.jpg|thumb|Rovers are coloured green in the simulation environment and can be detected using the camera. The coordinates are accurately generated in metres based on the size of the bounding box.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RoverDetection.jpeg&amp;diff=15772</id>
		<title>File:RoverDetection.jpeg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RoverDetection.jpeg&amp;diff=15772"/>
		<updated>2020-10-19T11:26:51Z</updated>

		<summary type="html">&lt;p&gt;A1669943: Rover is detected by another rover.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Rover is detected by another rover.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15771</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15771"/>
		<updated>2020-10-19T11:25:43Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM and Path Planning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A star) algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)[11]&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response from the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To further improve the rover's response, a fuzzy controller was designed to restrict acceleration when the error is high and to speed up the response at lower error values. The error graphs for the x-direction, y-direction and angular direction were analysed. Since the error is largest at the beginning, the system was given a slow start. The velocity of the rover was analysed at various error levels; as the error values decreased, the fuzzy controller switched to a different amplifier value so that the system was sped up.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
[[File:RoverDetection|thumb|Rovers are coloured green in the simulation environment and can be detected using the camera. The coordinates are accurately generated in metres based on the size of the bounding box.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;br /&gt;
[11] &amp;quot;A* Search Algorithm - GeeksforGeeks&amp;quot;, GeeksforGeeks, 2020. [Online]. Available: https://www.geeksforgeeks.org/a-search-algorithm/. [Accessed: 19- Oct- 2020].&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15766</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15766"/>
		<updated>2020-10-19T11:20:47Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A star) algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png |(A* SEARCH ALGORITHM - GEEKSFORGEEKS)&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning considers both the cost already incurred to reach the next node and the estimated total cost to the final destination. &lt;br /&gt;
It provides the optimal path for an agent from its current location to the destination.&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared for position control to achieve a fast and accurate response from the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted so that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with overshoot was caused by both high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
=== Target Detection ===&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15764</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15764"/>
		<updated>2020-10-19T11:19:35Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
A Star.png|A* search algorithm (GeeksforGeeks)&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
A* path planning provides an optimal path for an agent from its current location to the destination.&lt;br /&gt;
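A minimal Python sketch of grid-based A* with a Manhattan-distance heuristic follows; it is illustrative only (the project's planner runs in MATLAB), and the example grid is made up.

```python
import heapq

# Minimal A* on a 4-connected grid (0 = free, 1 = obstacle). A sketch of
# the planner described above, not the project's MATLAB implementation.
def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance heuristic, admissible on a unit-cost grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), 0, start, [start])]  # (f, g, node, path so far)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if rows > nr >= 0 and cols > nc >= 0 and grid[nr][nc] == 0:
                heapq.heappush(
                    open_set,
                    (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]),
                )
    return None  # no path exists

path = a_star([[0, 0, 0],
               [1, 1, 0],
               [0, 0, 0]], (0, 0), (2, 0))
```

Because the heuristic never overestimates the remaining cost, the first path popped at the goal is optimal, which is the property the project relies on.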
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values.&lt;br /&gt;
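The idea can be sketched as a pair of blended rules; the membership breakpoints and acceleration caps below are made-up values for illustration, not the project's actual rule base.

```python
# Made-up membership breakpoints and acceleration caps, for illustration only.
def mu_large(err, lo=0.2, hi=1.0):
    """Degree (0..1) to which the position error counts as 'large' (ramp)."""
    return min(1.0, max(0.0, (abs(err) - lo) / (hi - lo)))

def accel_limit(err, a_small=2.0, a_large=0.5):
    """Blend the two acceleration caps by the 'large error' membership."""
    w = mu_large(err)
    return w * a_large + (1.0 - w) * a_small
```

With a large error the cap is low, restraining acceleration; as the rover nears its goal the cap rises, speeding the response, matching the behaviour described above.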
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
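The tuning trade-off above can be reproduced with a toy discrete-time simulation of a PD loop on a unit-mass double integrator; the gains, time step, and horizon here are illustrative, not the project's tuned parameters.

```python
# Unit-mass double integrator under PD position control; gains, time step,
# and horizon are illustrative values, not the project's tuned parameters.
def simulate_pd(kp, kd, target=1.0, steps=2000, dt=0.01):
    x, v = 0.0, 0.0
    for _ in range(steps):
        err = target - x
        a = kp * err - kd * v  # PD law: derivative term damps the velocity
        v += a * dt
        x += v * dt
    return x

final = simulate_pd(kp=4.0, kd=4.0)  # balanced gains: settles at the target
```

Raising kp with too little kd reproduces the overshoot case, while a very small kp drags out the rise time, mirroring the curves in the figure.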
&lt;br /&gt;
=== SLAM and Path Planning ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
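The colour-based detection in this figure can be sketched as a simple threshold-and-centroid step; the thresholds and the tiny test image below are illustrative, not the project's calibrated values.

```python
# Threshold an RGB image for strongly red pixels and compute the blob
# centroid. Threshold values are illustrative, not calibrated.
def red_centroid(img, r_min=150, gb_max=80):
    """img is a list of rows of (r, g, b) tuples; returns (row, col) or None."""
    hits = [(i, j)
            for i, row in enumerate(img)
            for j, (r, g, b) in enumerate(row)
            if r >= r_min and gb_max >= g and gb_max >= b]
    if not hits:
        return None
    n = len(hits)
    return (sum(i for i, _ in hits) / n, sum(j for _, j in hits) / n)

img = [[(0, 0, 0)] * 4 for _ in range(4)]
img[1][2] = img[2][2] = (200, 10, 10)  # a small red target
centroid = red_centroid(img)           # (1.5, 2.0)
```

Combining this pixel centroid with the depth map yields the target's coordinates in the simulation.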
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
[[File:Astar.jpg|thumb|The map provides agents with obstacle location information, which they can use to plan paths with an A* Search to avoid collisions.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Astar.jpg&amp;diff=15763</id>
		<title>File:Astar.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Astar.jpg&amp;diff=15763"/>
		<updated>2020-10-19T11:19:27Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Path Planning&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15761</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15761"/>
		<updated>2020-10-19T11:16:24Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
[[File:A Star Path finding|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
The Kp and Kd values in this example were adjusted such that the response had the shortest rise time while avoiding steady-state error and overshoot. The response with the overshoot was caused by high Kp and Kd values. The response with the slowest rise time was due to an extremely low Kd value. A balanced set of Kp and Kd values resulted in a response with a quick rise time and no overshoot.&lt;br /&gt;
[[File:PD Controller Tuning.jpg|500px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
[[File:RealEnvironment.jpg|thumb|The map depicted in the previous figure represents data collected from the environment depicted here.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RealEnvironment.jpg&amp;diff=15759</id>
		<title>File:RealEnvironment.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:RealEnvironment.jpg&amp;diff=15759"/>
		<updated>2020-10-19T11:11:05Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The map of the environment in the previous figure is a model of the real environment depicted here.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15756</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15756"/>
		<updated>2020-10-19T11:09:25Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* SLAM */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Path Planning  ===&lt;br /&gt;
The path planning method used in this project is the A* (A-star) search algorithm.&lt;br /&gt;
[[File:A Star Path finding|thumb]]&lt;br /&gt;
&lt;br /&gt;
=== Control of 3DOF Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
The system uses a three-wheel-drive omnidirectional rover platform with three degrees of freedom (3DOF). Four basic control algorithms were compared in terms of position control to achieve a fast and accurate response for the system.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Fuzzy Controller ===&lt;br /&gt;
&lt;br /&gt;
To improve the response of the rover further, a fuzzy controller is designed to restrict acceleration when the error is high and to speed up the response at lower error values.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Testing of Control Algorithms ===&lt;br /&gt;
&lt;br /&gt;
[[File:PD Controller Tuning.jpg|10px|thumb|center|PD Controller Tuning]]&lt;br /&gt;
&lt;br /&gt;
=== SLAM ===&lt;br /&gt;
[[File:An Agent Based System for Target Search and Delivery System Overview.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
[[File:EnvironmnetMap.jpg|thumb|This is a map that was generated during a simulation run. Obstacles are accurately detected using laser range sensors, and their positions in global coordinates are stored in the map as red dots.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004,University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:EnvironmnetMap.jpg&amp;diff=15754</id>
		<title>File:EnvironmnetMap.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:EnvironmnetMap.jpg&amp;diff=15754"/>
		<updated>2020-10-19T11:09:14Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Map of the environment.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:EnvironmentMap.jpg&amp;diff=15741</id>
		<title>File:EnvironmentMap.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:EnvironmentMap.jpg&amp;diff=15741"/>
		<updated>2020-10-19T10:52:19Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Data from a simulation run, showing the map of the environment generated by the system.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15736</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15736"/>
		<updated>2020-10-19T10:47:53Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will be using the concepts in artificial intelligence, control theory and signal processing to create a system in which a team of robots work together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim of this system will be to test the performance of a single agent system against that of a multi agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building is comprised of multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots, a manipulator (which has a mounted robotic arm), and a carrier (which has a flat platform mounted on top of it). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier. The detection of a target triggers cooperation of this form by the robots. As the task is being carried out, a map of the environment is being created using the data from the robot’s sensors. The sensors being used are a laser range detector and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
This project uses MATLAB and V-REP co-simulation to model the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Control of 3WD Omnidirectional Rover ===&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Rover Design in VREP&lt;br /&gt;
Physical Rover.jpg|Physical Rover Platform &lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
[[File:TargetDetection.jpg|thumb|Targets are successfully detected based on their red colour, and their coordinates are generated accurately in the simulation.]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots , Feb 12, 2019, . Retrieved April 23, 2020, From https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems” GeeksforGeeks, Retrieved April 23, 2020, From https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:TargetDetection.jpg&amp;diff=15735</id>
		<title>File:TargetDetection.jpg</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:TargetDetection.jpg&amp;diff=15735"/>
		<updated>2020-10-19T10:47:45Z</updated>

		<summary type="html">&lt;p&gt;A1669943: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;In the left-most image, the scene is acquired using a vision sensor and the red colour is detected and labelled with a bounding box and a centroid. The middle image shows the binary version of the image with only the target highlighted. The right image shows the depth map from the depth sensor, which gives the distance to the target.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15730</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15730"/>
		<updated>2020-10-19T10:39:38Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
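As a minimal illustration of the mapping idea, the sketch below marks the grid cell at the end of each range reading as occupied: an occupancy-grid update with no uncertainty handling, far simpler than a full SLAM pipeline. All names and the 0.5 m cell size are assumed for illustration only.

```python
import numpy as np

def update_map(grid, pose, bearings, ranges, cell=0.5):
    """Mark the grid cell hit by each range reading as occupied.
    pose = (x, y) in metres, bearings in radians, grid indexed
    [row, col].  Parameters are illustrative, not the project's
    actual implementation."""
    x0, y0 = pose
    for theta, dist in zip(bearings, ranges):
        # endpoint of the range beam in world coordinates
        x = x0 + dist * np.cos(theta)
        y = y0 + dist * np.sin(theta)
        row, col = int(y / cell), int(x / cell)
        if row in range(grid.shape[0]) and col in range(grid.shape[1]):
            grid[row, col] = 1             # mark the cell as occupied
    return grid
```

In true SLAM the agent's pose itself is uncertain and must be estimated jointly with the map, which is exactly what makes the problem hard.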
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier; the detection of a target is what triggers this cooperation. As the task is carried out, a map of the environment is built from the robots’ sensor data. The sensors used are a laser range finder and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method for this project combines MATLAB with V-REP simulation to simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Control of 3WD Omnidirectional Rover ===&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
Rover Design in VREP.jpg|Caption1&lt;br /&gt;
Example.jpg|Caption2&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems”, GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15728</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=15728"/>
		<updated>2020-10-19T10:38:29Z</updated>

		<summary type="html">&lt;p&gt;A1669943: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Autonomous systems are making their way into our world at a blistering pace due to their potential to make our lives easier. In their most successful applications to date, robots and automation have increased productivity, lowered production costs, and generated job opportunities in the tech sector. Some of the most famous examples of these systems include:&lt;br /&gt;
*Driverless Cars,&lt;br /&gt;
*Autonomous Vacuum Cleaners,&lt;br /&gt;
*Assembly Line Robotic Arms (pictured below)&lt;br /&gt;
&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of. This project also uses Computer Vision for detecting target objects and other rovers, which informs the path planning for the rovers and the arm.&lt;br /&gt;
&lt;br /&gt;
== Motivation ==&lt;br /&gt;
Our aim for this project is to compare single-agent systems with multi-agent systems in terms of efficiency in completing identical search and delivery tasks. The system we are building comprises multiple robots that must cooperate in an unknown environment to find and retrieve objects of interest. There are two types of robots: a manipulator (which has a mounted robotic arm) and a carrier (which has a flat platform mounted on top). The manipulator uses its arm to pick up the target object when it is found and load it onto the carrier; the detection of a target is what triggers this cooperation. As the task is carried out, a map of the environment is built from the robots’ sensor data. The sensors used are a laser range finder and a camera.&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
The method for this project combines MATLAB with V-REP simulation to simulate the real-life environment.&lt;br /&gt;
&lt;br /&gt;
=== Control of 3WD Omnidirectional Rover ===&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] N. Correll, Introduction to autonomous robots, 1st ed. Boulder, Colorado: Magellan Scientific, p. 17.&lt;br /&gt;
[2] Sariff, N. and Buniyamin, N., 2020. An Overview Of Autonomous Mobile Robot Path Planning Algorithms - IEEE Conference Publication. [online] Ieeexplore.ieee.org. Available at: &amp;lt;https://ieeexplore.ieee.org/abstract/document/4339335&amp;gt; [Accessed 15 April 2020]. &lt;br /&gt;
[3] Anderson, J., Kalra, N., Stanley, K., Sorensen, P., Samaras, C., &amp;amp; Oluwatola, O. (2014). Brief History and Current State of Autonomous Vehicles. In Autonomous Vehicle Technology: A Guide for Policymakers (pp. 55-74). RAND Corporation. Retrieved April 23, 2020, from www.jstor.org/stable/10.7249/j.ctt5hhwgz.11&lt;br /&gt;
[4] &amp;quot;Single agent vs multi agent system in AI.&amp;quot; Geekboots, Feb 12, 2019. Retrieved April 23, 2020, from https://www.geekboots.com/story/single-agent-vs-multi-agent-system.&lt;br /&gt;
[5] Parikshit H, “Comparison – Centralized, Decentralized and Distributed Systems”, GeeksforGeeks. Retrieved April 23, 2020, from https://www.geeksforgeeks.org/comparison-centralized-decentralized-and-distributed-systems/&lt;br /&gt;
[6] Cadena, Cesar &amp;amp; Carlone, Luca &amp;amp; Carrillo, Henry &amp;amp; Latif, Yasir &amp;amp; Scaramuzza, Davide &amp;amp; Neira, Jose &amp;amp; Reid, Ian &amp;amp; Leonard, John. (2016). Simultaneous Localization And Mapping: Present, Future, and the Robust-Perception Age. IEEE Transactions on Robotics. 32. 10.1109/TRO.2016.2624754. &lt;br /&gt;
[7] J. Bobrow, S. Dubowsky and J. Gibson, &amp;quot;Time-Optimal Control of Robotic Manipulators Along Specified Paths&amp;quot;, The International Journal of Robotics Research, vol. 4, no. 3, pp. 3-17, 1985. Available: 10.1177/027836498500400301.&lt;br /&gt;
[8] W. Miller, F. Glanz and L. Kraft, &amp;quot;Application of a General Learning Algorithm to the Control of Robotic Manipulators&amp;quot;, The International Journal of Robotics Research, vol. 6, no. 2, pp. 84-98, 1987. Available: 10.1177/027836498700600207.&lt;br /&gt;
[9] F. Ribeiro, I. Moutinho, P. Silva, C. Fraga and N. Pereira, &amp;quot;Three omni-directional wheels control on a mobile robot,&amp;quot; Control 2004, University of Bath, UK, Sept. 2004.&lt;br /&gt;
[10] W. Li, C. Yang, Y. Jiang, X. Liu and C. Su, &amp;quot;Motion Planning for Omnidirectional Wheeled Mobile Robot by Potential Field Method&amp;quot;, Journal of Advanced Transportation, vol. 2017, pp. 1-11, 2017. Available: 10.1155/2017/4961383.&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=14158</id>
		<title>Projects:2020s1-2330 An Agent-based System for Target Searching and Delivering</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2020s1-2330_An_Agent-based_System_for_Target_Searching_and_Delivering&amp;diff=14158"/>
		<updated>2020-04-23T13:57:22Z</updated>

		<summary type="html">&lt;p&gt;A1669943: Created page with &amp;quot;== Project Students == Muyu Wu, Abrar Ali Chowdhury, Zach Vawser  == Project Supervisors == Professor Peng Shi,  Professor Cheng-Chew Lim  == Project Advisors == Xin Yuan, Y...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Students ==&lt;br /&gt;
Muyu Wu, Abrar Ali Chowdhury, Zach Vawser&lt;br /&gt;
&lt;br /&gt;
== Project Supervisors ==&lt;br /&gt;
Professor Peng Shi,  Professor Cheng-Chew Lim&lt;br /&gt;
&lt;br /&gt;
== Project Advisors ==&lt;br /&gt;
Xin Yuan, Yuan Sun, Yang Fei, Zhi Lian&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
&lt;br /&gt;
In this project, we will use concepts from artificial intelligence, control theory and signal processing to create a system in which a team of robots works together through a centralised system to search for, detect, and collect target objects from an unknown environment, then deliver those targets to a destination. The aim is to test the performance of a single-agent system against that of a multi-agent system.&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
At the core of an autonomous system is the agent, something with the ability to perceive its environment and act upon it using effectors. There may be one or more of these in a system, hence single and multi-agent systems. The primary distinction between these types of systems is that multi-agent ones are dynamic, as agents must handle each other as well as the environment. &lt;br /&gt;
&lt;br /&gt;
The following three concepts are critical to autonomous systems and heavily intertwined. &lt;br /&gt;
First up is mapping, which is the action of an agent using its perceptions to develop a model of the environment it’s in. Localisation is where the agent uses a-priori knowledge of its environment, called a map, to determine where it is in that environment based on the landmarks it can see. With that brief introduction to agents, mapping and localisation, we now reach one of the most fundamental aspects of autonomous systems: doing both at the same time. This technique is called simultaneous localisation and mapping or SLAM, and is needed when an agent is in an environment it has no or very little a-priori knowledge of.&lt;br /&gt;
== Motivation ==&lt;br /&gt;
== Method ==&lt;br /&gt;
== Results ==&lt;/div&gt;</summary>
		<author><name>A1669943</name></author>
		
	</entry>
</feed>