<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1132039</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1132039"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1132039"/>
	<updated>2026-04-25T11:37:47Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-24_AI_Agent_Development_for_an_Autonomous_Trash_Collecting_Robot&amp;diff=1452</id>
		<title>Projects:2014S1-24 AI Agent Development for an Autonomous Trash Collecting Robot</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-24_AI_Agent_Development_for_an_Autonomous_Trash_Collecting_Robot&amp;diff=1452"/>
		<updated>2014-10-29T04:24:37Z</updated>

		<summary type="html">&lt;p&gt;A1132039: Add VRDE images&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Information ==&lt;br /&gt;
&lt;br /&gt;
The goal of this project was to create a software environment for the development of Artificial Intelligence [AI] agents, and an agent to operate in it.&lt;br /&gt;
&lt;br /&gt;
The creation of this agent facilitates an exploration of the Soar architecture, and helps promote an understanding of its technical challenges and the kinds of problems it may be suited to solving. The virtual environment allows the development of this agent in ideal conditions, where the complications of designing and interacting with hardware are abstracted away.&lt;br /&gt;
&lt;br /&gt;
=== Environment ===&lt;br /&gt;
&lt;br /&gt;
The software environment comprises two main components: the Virtual Robot Development Environment [VRDE] and the Communication Layer [CL].&lt;br /&gt;
&lt;br /&gt;
==== Virtual Robot Development Environment ====&lt;br /&gt;
&lt;br /&gt;
The VRDE is created in the Unity3D game engine. It contains a single, user-controllable robot that can drive around, pick up loose cans, and deposit them in bins.&lt;br /&gt;
&lt;br /&gt;
[[File:G24VRDELevel.png|600px|Top-down view of the test level]]&lt;br /&gt;
&lt;br /&gt;
The robot may be instructed to move, rotate, and halt, as well as pick up and discard items. These actions can be driven by a user via the keyboard, or by an AI Agent, written in Soar.&lt;br /&gt;
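As a rough illustration of the instruction set above, the listed actions could be represented as a small command enum on the Java side of the Communication Layer. This is a hypothetical sketch: the command names and JSON fields are illustrative assumptions, not the project's actual message schema.

```java
public class RobotCommand {
    // The five instructions the wiki lists: move, rotate, halt, pick up, discard.
    public enum Type { MOVE, ROTATE, HALT, PICK_UP, DISCARD }

    // Build a simple JSON instruction string such as the CL might send to the
    // VRDE; the field names "instruction" and "value" are assumptions.
    public static String toJson(Type type, double value) {
        return String.format(java.util.Locale.ROOT,
            "{\"instruction\":\"%s\",\"value\":%.1f}",
            type.name().toLowerCase(java.util.Locale.ROOT), value);
    }
}
```

A keyboard handler and the AI agent could both funnel their actions through the same builder, which is one way to keep the user-driven and agent-driven control paths identical.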
&lt;br /&gt;
To allow the AI to make appropriate decisions, the robot contains a number of sensors to feed it data about the virtual environment. These sensors are designed to mimic physically realisable sensors, but simplified to allow the AI to easily use them. The currently implemented sensors are:&lt;br /&gt;
&lt;br /&gt;
* Camera - Notifies the AI of any can or bin in sight, sending the name, range, and bearing of the object of interest.&lt;br /&gt;
* Compass - Constantly sends the robot&amp;#039;s current orientation, relative to its spawn orientation&lt;br /&gt;
* Distance - Sends the total distance moved for the move operation in progress&lt;br /&gt;
* Rotation - Sends the total degrees rotated for the rotate operation in progress&lt;br /&gt;
* Collision - Acts like eight switches surrounding the robot, which notify the AI whenever they are triggered by the environment.&lt;br /&gt;
* Movement Status - Denotes whether the robot is currently executing a move or rotate operation.&lt;br /&gt;
* Arm Status - Denotes whether the arm is currently holding a can.&lt;br /&gt;
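The camera report above (name, range, and bearing of the object of interest) could be modelled on the Java side as a small value class. A minimal sketch, assuming hypothetical JSON field names and units that the source does not specify:

```java
public class CameraReading {
    public final String name;    // name of the can or bin in sight
    public final double range;   // distance to the object (unit assumed)
    public final double bearing; // bearing to the object in degrees (assumed)

    public CameraReading(String name, double range, double bearing) {
        this.name = name;
        this.range = range;
        this.bearing = bearing;
    }

    // Serialise the reading as a simple JSON structure for the socket link.
    public String toJson() {
        return String.format(java.util.Locale.ROOT,
            "{\"name\":\"%s\",\"range\":%.2f,\"bearing\":%.2f}",
            name, range, bearing);
    }
}
```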
&lt;br /&gt;
[[File:G24RobotCamera.png|600px|Robot camera testing visibility and line of sight]]&lt;br /&gt;
&lt;br /&gt;
This collection of sensors is sufficient to allow the AI to perform its current task of hunting down loose cans and depositing them in bins. It also allows for potential future AI goals, such as mapping the positions of bins for more efficient disposal.&lt;br /&gt;
&lt;br /&gt;
The VRDE has a well-maintained codebase, a test suite, documentation, and a developer wiki.&lt;br /&gt;
&lt;br /&gt;
==== Communication Layer ====&lt;br /&gt;
&lt;br /&gt;
The Communication Layer is written in Java, and is used to allow the VRDE to communicate with the Soar AI Agent via the JSoar implementation.&lt;br /&gt;
&lt;br /&gt;
The CL attaches event listeners to certain output links in the AI agent, which allows the agent to send instructions by writing to these links. The CL writes sensor data from the VRDE onto input links in the agent&amp;#039;s memory.&lt;br /&gt;
&lt;br /&gt;
The CL and VRDE use JavaScript Object Notation [JSON] to communicate. All instructions, network messages, and sensor data are written to simple JSON structures and sent over a socket connection between the two.&lt;br /&gt;
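A minimal sketch of this JSON-over-socket exchange, assuming one JSON message per line (newline-delimited framing is an assumption; the source does not specify the framing). Piped in-memory streams stand in for the real network socket so the round trip is self-contained:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.PrintWriter;

public class Framing {
    // Send one JSON message through a stream pair and read it back, the way
    // the CL and VRDE would exchange a message over their socket connection.
    public static String roundTrip(String json) throws Exception {
        PipedOutputStream outPipe = new PipedOutputStream();
        PipedInputStream inPipe = new PipedInputStream(outPipe);

        // Sender side: one JSON structure per line.
        PrintWriter out = new PrintWriter(outPipe, true);
        out.println(json);

        // Receiver side: read back exactly one framed message.
        BufferedReader in = new BufferedReader(new InputStreamReader(inPipe));
        return in.readLine();
    }
}
```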
&lt;br /&gt;
&lt;br /&gt;
=== AI Agent ===&lt;br /&gt;
&lt;br /&gt;
The AI agent is created in Soar, a unified architecture particularly suited to developing intelligent systems. Soar is a cognitive architecture that allows modelling of certain aspects of human cognition, such as memory and reward systems. The agent uses the sensor data from the VRDE to make decisions.&lt;br /&gt;
&lt;br /&gt;
The agent has the following goal hierarchy:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Search-for-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, and there is no can detected with the camera, it will perform the following:&lt;br /&gt;
&lt;br /&gt;
# rotate the robot 360 degrees to scan the surroundings&lt;br /&gt;
# choose a random direction&lt;br /&gt;
# move a random distance&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a can is visible&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Move-to-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, and there is a can detected with the camera but it is not reachable with the arm, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate to the direction where the can is&lt;br /&gt;
# move toward the can until the arm can reach it&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a can is reachable&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Pick-up-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, there is a can detected with the camera, and it is reachable with the arm, it will pick up the can.&lt;br /&gt;
&lt;br /&gt;
Desired state: the arm status sensor reports the robot is holding a can&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Search-for-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, and there is no bin detected with the camera, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate the robot 360 degrees to scan the surroundings&lt;br /&gt;
# choose a random direction&lt;br /&gt;
# move a random distance&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a bin is visible&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Move-to-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, and there is a bin detected with the camera but it is not reachable with the arm, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate to the direction where the bin is&lt;br /&gt;
# move toward the bin until the arm can reach it&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a bin is reachable&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Discard-in-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, there is a bin detected with the camera, and it is reachable with the arm, then discard the can in the bin.&lt;br /&gt;
&lt;br /&gt;
Desired state: the arm status sensor reports the robot is not holding a can anymore&lt;br /&gt;
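The goal hierarchy above amounts to a simple condition-based selection over three questions: is the robot holding a can, is the target object visible, and is it reachable. A minimal Java sketch of that selection logic follows; the booleans mirror the sensor conditions described, but the method itself is a hypothetical illustration, not the agent's actual Soar productions.

```java
public class GoalSelector {
    // Pick the active goal from the sensor conditions, following the
    // hierarchy in the text: find, approach, and grab a can; then find,
    // approach, and use a bin.
    public static String selectGoal(boolean holdingCan, boolean canVisible,
            boolean canReachable, boolean binVisible, boolean binReachable) {
        if (!holdingCan) {
            if (!canVisible) return "search-for-can";
            if (!canReachable) return "move-to-can";
            return "pick-up-can";
        }
        if (!binVisible) return "search-for-bin";
        if (!binReachable) return "move-to-bin";
        return "discard-in-bin";
    }
}
```

Each goal's "desired state" is simply the condition that moves the selection one step further down this chain, which is why the six goals together form a complete collect-and-dispose cycle.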
&lt;br /&gt;
=== Outcomes ===&lt;br /&gt;
This project succeeded in its goal of creating a virtual environment in which a Soar AI agent can operate. It provides an appropriate platform to create and explore Soar agents for autonomous vehicles. The delivered AI agent demonstrates that the system is capable of interacting with an agent and providing it enough information to make decisions.&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
* David Reece&lt;br /&gt;
* Shuangsheng Liu&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
* Dr Brian Ng&lt;/div&gt;</summary>
		<author><name>A1132039</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:G24RobotCamera.png&amp;diff=1449</id>
		<title>File:G24RobotCamera.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:G24RobotCamera.png&amp;diff=1449"/>
		<updated>2014-10-29T04:18:33Z</updated>

		<summary type="html">&lt;p&gt;A1132039: The robot&amp;#039;s camera, which consists of a trigger zone, and line of sight raycasts.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The robot&amp;#039;s camera, which consists of a trigger zone, and line of sight raycasts.&lt;/div&gt;</summary>
		<author><name>A1132039</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:G24VRDELevel.png&amp;diff=1448</id>
		<title>File:G24VRDELevel.png</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:G24VRDELevel.png&amp;diff=1448"/>
		<updated>2014-10-29T04:16:47Z</updated>

		<summary type="html">&lt;p&gt;A1132039: Overview of the VRDE demo level&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Overview of the VRDE demo level&lt;/div&gt;</summary>
		<author><name>A1132039</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-24_AI_Agent_Development_for_an_Autonomous_Trash_Collecting_Robot&amp;diff=1137</id>
		<title>Projects:2014S1-24 AI Agent Development for an Autonomous Trash Collecting Robot</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-24_AI_Agent_Development_for_an_Autonomous_Trash_Collecting_Robot&amp;diff=1137"/>
		<updated>2014-10-25T10:50:52Z</updated>

		<summary type="html">&lt;p&gt;A1132039: Initial Page Created&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Project Information ==&lt;br /&gt;
&lt;br /&gt;
The goal of this project was to create a software environment for the development of Artificial Intelligence [AI] agents, and an agent to operate in it.&lt;br /&gt;
&lt;br /&gt;
The creation of this agent facilitates an exploration of the Soar architecture, and helps promote an understanding of its technical challenges and the kinds of problems it may be suited to solving. The virtual environment allows the development of this agent in ideal conditions, where the complications of designing and interacting with hardware are abstracted away.&lt;br /&gt;
&lt;br /&gt;
=== Environment ===&lt;br /&gt;
&lt;br /&gt;
The software environment comprises two main components: the Virtual Robot Development Environment [VRDE] and the Communication Layer [CL].&lt;br /&gt;
&lt;br /&gt;
==== Virtual Robot Development Environment ====&lt;br /&gt;
&lt;br /&gt;
The VRDE is created in the Unity3D game engine. It contains a single, user-controllable robot that can drive around, pick up loose cans, and deposit them in bins.&lt;br /&gt;
&lt;br /&gt;
The robot may be instructed to move, rotate, and halt, as well as pick up and discard items. These actions can be driven by a user via the keyboard, or by an AI Agent, written in Soar.&lt;br /&gt;
&lt;br /&gt;
To allow the AI to make appropriate decisions, the robot contains a number of sensors to feed it data about the virtual environment. These sensors are designed to mimic physically realisable sensors, but simplified to allow the AI to easily use them. The currently implemented sensors are:&lt;br /&gt;
&lt;br /&gt;
* Camera - Notifies the AI of any can or bin in sight, sending the name, range, and bearing of the object of interest.&lt;br /&gt;
* Compass - Constantly sends the robot&amp;#039;s current orientation, relative to its spawn orientation&lt;br /&gt;
* Distance - Sends the total distance moved for the move operation in progress&lt;br /&gt;
* Rotation - Sends the total degrees rotated for the rotate operation in progress&lt;br /&gt;
* Collision - Acts like eight switches surrounding the robot, which notify the AI whenever they are triggered by the environment.&lt;br /&gt;
* Movement Status - Denotes whether the robot is currently executing a move or rotate operation.&lt;br /&gt;
* Arm Status - Denotes whether the arm is currently holding a can.&lt;br /&gt;
&lt;br /&gt;
This collection of sensors is sufficient to allow the AI to perform its current task of hunting down loose cans and depositing them in bins. It also allows for potential future AI goals, such as mapping the positions of bins for more efficient disposal.&lt;br /&gt;
&lt;br /&gt;
The VRDE has a well-maintained codebase, a test suite, documentation, and a developer wiki.&lt;br /&gt;
&lt;br /&gt;
==== Communication Layer ====&lt;br /&gt;
&lt;br /&gt;
The Communication Layer is written in Java, and is used to allow the VRDE to communicate with the Soar AI Agent via the JSoar implementation.&lt;br /&gt;
&lt;br /&gt;
The CL attaches event listeners to certain output links in the AI agent, which allows the agent to send instructions by writing to these links. The CL writes sensor data from the VRDE onto input links in the agent&amp;#039;s memory.&lt;br /&gt;
&lt;br /&gt;
The CL and VRDE use JavaScript Object Notation [JSON] to communicate. All instructions, network messages, and sensor data are written to simple JSON structures and sent over a socket connection between the two.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== AI Agent ===&lt;br /&gt;
&lt;br /&gt;
The AI agent is created in Soar, a unified architecture particularly suited to developing intelligent systems. Soar is a cognitive architecture that allows modelling of certain aspects of human cognition, such as memory and reward systems. The agent uses the sensor data from the VRDE to make decisions.&lt;br /&gt;
&lt;br /&gt;
The agent has the following goal hierarchy:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Search-for-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, and there is no can detected with the camera, it will perform the following:&lt;br /&gt;
&lt;br /&gt;
# rotate the robot 360 degrees to scan the surroundings&lt;br /&gt;
# choose a random direction&lt;br /&gt;
# move a random distance&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a can is visible&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Move-to-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, and there is a can detected with the camera but it is not reachable with the arm, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate to the direction where the can is&lt;br /&gt;
# move toward the can until the arm can reach it&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a can is reachable&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Pick-up-can&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is not holding a can, there is a can detected with the camera, and it is reachable with the arm, it will pick up the can.&lt;br /&gt;
&lt;br /&gt;
Desired state: the arm status sensor reports the robot is holding a can&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Search-for-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, and there is no bin detected with the camera, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate the robot 360 degrees to scan the surroundings&lt;br /&gt;
# choose a random direction&lt;br /&gt;
# move a random distance&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a bin is visible&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Move-to-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, and there is a bin detected with the camera but it is not reachable with the arm, it will:&lt;br /&gt;
&lt;br /&gt;
# rotate to the direction where the bin is&lt;br /&gt;
# move toward the bin until the arm can reach it&lt;br /&gt;
&lt;br /&gt;
Desired state: the camera sensor reports a bin is reachable&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Discard-in-bin&amp;#039;&amp;#039; - &lt;br /&gt;
If the robot is holding a can, there is a bin detected with the camera, and it is reachable with the arm, then discard the can in the bin.&lt;br /&gt;
&lt;br /&gt;
Desired state: the arm status sensor reports the robot is not holding a can anymore&lt;br /&gt;
&lt;br /&gt;
=== Outcomes ===&lt;br /&gt;
This project succeeded in its goal of creating a virtual environment in which a Soar AI agent can operate. It provides an appropriate platform to create and explore Soar agents for autonomous vehicles. The delivered AI agent demonstrates that the system is capable of interacting with an agent and providing it enough information to make decisions.&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
* David Reece&lt;br /&gt;
* Shuangsheng Liu&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
* Brian Ng&lt;br /&gt;
* Michael Liebelt&lt;/div&gt;</summary>
		<author><name>A1132039</name></author>
		
	</entry>
</feed>