<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1192780</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1192780"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1192780"/>
	<updated>2026-04-15T02:39:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=1755</id>
		<title>Projects:2014S1-16 Automatic Sorter using Computer Vision</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=1755"/>
		<updated>2014-10-29T14:34:03Z</updated>

		<summary type="html">&lt;p&gt;A1192780: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2014S1|16]]&lt;br /&gt;
&lt;br /&gt;
The aim of this project is to integrate computer vision with autonomous robotics to sort playing cards. The project is primarily an interactive demonstration tool, showing prospective electrical engineering students what the technology can do.&lt;br /&gt;
[[File:board_img.jpg|thumb|Completed Project.]]&lt;br /&gt;
&lt;br /&gt;
== Project information ==&lt;br /&gt;
The objectives identified at the beginning of the project are the following:&lt;br /&gt;
* Sort a full deck of standard playing cards&lt;br /&gt;
* Use computer vision to differentiate between cards&lt;br /&gt;
*Perform the following sorts:&lt;br /&gt;
**Full Sort&lt;br /&gt;
**Suit Sort&lt;br /&gt;
**Colour Sort &lt;br /&gt;
**Value Sort&lt;br /&gt;
* Focus on electrical engineering, particularly image processing, and reduce the mechanical requirements&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project was tackled by breaking it into four subsections:&lt;br /&gt;
*Computer Vision&lt;br /&gt;
*Robotics&lt;br /&gt;
*Card Sorting Algorithms&lt;br /&gt;
*Graphical User Interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Hardware architecture overview.jpg|thumb|Hardware overview.]]&lt;br /&gt;
The system consists of four hardware components: &lt;br /&gt;
* Camera&lt;br /&gt;
* Laptop&lt;br /&gt;
* Microcontroller&lt;br /&gt;
* Robotic Arm&lt;br /&gt;
Their individual functions are detailed in the image to the side.&lt;br /&gt;
&lt;br /&gt;
== Project Breakdown == &lt;br /&gt;
&lt;br /&gt;
=== Image Processing ===&lt;br /&gt;
&lt;br /&gt;
[[File:Image_Proc_Stages_Wiki.jpg|thumb|Image Processing Block Diagram.]]&lt;br /&gt;
&lt;br /&gt;
The purpose of the image processing software of this project is to distinguish between different cards.&lt;br /&gt;
&lt;br /&gt;
This is done using a four-step method:&lt;br /&gt;
&lt;br /&gt;
* Find the outline of the playing card on the black background&lt;br /&gt;
* Crop and warp the playing card so that it is a perfect rectangle&lt;br /&gt;
* Crop the suit and value images from the top left corner&lt;br /&gt;
* Run Optical Character Recognition software on the suit and value images&lt;br /&gt;
&lt;br /&gt;
The digital image techniques used in the above steps are as follows:&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hough_transform The Hough Transform]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Template_matching Pixel Template Matching]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Thresholding_(image_processing) Adaptive Image Thresholding]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Canny_edge_detector Edge Detection]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Blob_detection Blob Detection]&lt;br /&gt;
(Refer to the links for further details.)&lt;br /&gt;
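The template-matching step used for recognising the suit and value glyphs can be sketched in a few lines. This is a simplified illustration using binarised images as nested Python lists — not the project&amp;#039;s actual MATLAB implementation, and the function names are hypothetical:

```python
def template_match_score(patch, template):
    """Fraction of pixels at which a binarised patch agrees with a template.

    patch and template are equal-sized 2-D lists of 0/1 pixel values.
    """
    rows, cols = len(template), len(template[0])
    matches = sum(
        1
        for r in range(rows)
        for c in range(cols)
        if patch[r][c] == template[r][c]
    )
    return matches / (rows * cols)


def classify(patch, templates):
    """Return the name of the best-matching template (e.g. a suit glyph)."""
    return max(templates, key=lambda name: template_match_score(patch, templates[name]))
```

In practice the cropped corner image would first be thresholded and resized to the template dimensions before scoring.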
&lt;br /&gt;
[[File:Img proc stages.png|thumb|center|upright=2.0|Image Processing Steps Graphically Represented. (a) Original Image; (b) Outline of Card; (c) Corners of Card; (d) Suit and Value of Card Identified to be cropped]]&lt;br /&gt;
&lt;br /&gt;
====Results====&lt;br /&gt;
The table below shows the results of testing carried out on 24 October 2014. Scripted batch testing was run, as well as tests of the system as a whole.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;  style=&amp;quot;margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|+ &amp;#039;&amp;#039;&amp;#039;Computer Vision Test Results&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|-&lt;br /&gt;
! Run Name&lt;br /&gt;
! Cards Tested&lt;br /&gt;
! Cards Correct&lt;br /&gt;
|-&lt;br /&gt;
| Batch Test&lt;br /&gt;
| 52&lt;br /&gt;
| 52&lt;br /&gt;
|-&lt;br /&gt;
| System Test 1&lt;br /&gt;
| 52&lt;br /&gt;
| 49&lt;br /&gt;
|-&lt;br /&gt;
| System Test 2&lt;br /&gt;
| 52&lt;br /&gt;
| 52&lt;br /&gt;
|-&lt;br /&gt;
| System Test 3&lt;br /&gt;
| 52&lt;br /&gt;
| 50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Robotics and Kinematics ===&lt;br /&gt;
[[File:Robotic arm.JPG|thumb|Inverse kinematic geometry of robotic arm]]&lt;br /&gt;
Arduino program&lt;br /&gt;
* Connects to MATLAB via USB&lt;br /&gt;
* Receives the new angle for each servo from MATLAB&lt;br /&gt;
* Notifies MATLAB when the robotic arm has finished moving&lt;br /&gt;
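The exact serial protocol is not documented here; one plausible framing (comma-separated servo angles on one newline-terminated line, with the Arduino replying when the move completes) might look like the following Python sketch — the message format is an assumption, not the project&amp;#039;s actual protocol:

```python
def encode_angles(angles):
    """Pack servo target angles (degrees) into one ASCII command line."""
    return ",".join(str(int(a)) for a in angles) + "\n"

def decode_angles(line):
    """Parse a command line back into integer angles, much as the
    Arduino sketch might do before driving each servo."""
    return [int(token) for token in line.strip().split(",")]
```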
&lt;br /&gt;
MATLAB program&lt;br /&gt;
* Decides which set of movements to use, depending on where a card is picked from and where it is placed&lt;br /&gt;
* Uses inverse kinematics to determine the angles of the robotic arm, depending on where the card is to be placed and how high the stack is&lt;br /&gt;
* Forward kinematic method:&lt;br /&gt;
  x = l1·cos(θ1) + l2·cos(θ1 – θ2) + l3·cos(θ1 – θ2 – θ3)&lt;br /&gt;
  y = l1·sin(θ1) + l2·sin(θ1 – θ2) + l3·sin(θ1 – θ2 – θ3)&lt;br /&gt;
* The inverse kinematic method uses the geometric features of the arm to find all of its joint angles, as described in the diagram to the right&lt;br /&gt;
&lt;br /&gt;
* Small calibrations are made to correct the inverse kinematic method&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t bump into anything&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t pick up two cards stuck together by static electricity&lt;br /&gt;
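The forward kinematic equations above can be checked numerically. A minimal Python sketch, using hypothetical link lengths (the real AL5D dimensions differ):

```python
import math

# Hypothetical link lengths in millimetres (illustrative only).
L1, L2, L3 = 146.0, 185.0, 90.0

def forward_kinematics(theta1, theta2, theta3):
    """End-effector (x, y) from joint angles in radians, following
    x = l1*cos(th1) + l2*cos(th1 - th2) + l3*cos(th1 - th2 - th3),
    and similarly with sin for y."""
    a1 = theta1
    a2 = theta1 - theta2
    a3 = theta1 - theta2 - theta3
    x = L1 * math.cos(a1) + L2 * math.cos(a2) + L3 * math.cos(a3)
    y = L1 * math.sin(a1) + L2 * math.sin(a2) + L3 * math.sin(a3)
    return x, y
```

With all relative angles zero and the base joint vertical, the arm is fully extended upward, which gives a quick sanity check on the equations.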
&lt;br /&gt;
&lt;br /&gt;
=== Card Sorting Algorithms ===&lt;br /&gt;
[[File:bucketSortDiagram.jpg|thumb|Bucket sort.]]&lt;br /&gt;
&lt;br /&gt;
The cards are sorted via one of the following methods chosen using the GUI:&lt;br /&gt;
* Separate Colours&lt;br /&gt;
* Separate Suits&lt;br /&gt;
* Separate Values&lt;br /&gt;
* Select-A-Card (pick cards you want the robot to find)&lt;br /&gt;
* Full Sort (back to a brand new deck order)&lt;br /&gt;
&lt;br /&gt;
The full sort algorithm is based on a combination of bucket (postman&amp;#039;s) sort and selection sort, and occurs in three stages:&lt;br /&gt;
* Cards are partitioned into buckets dependent on their value, as shown in the image to the right&lt;br /&gt;
* Each bucket is emptied out progressively onto the board&lt;br /&gt;
* Cards are selected in order from the emptied buckets and placed in sorted stacks dependent on their suit&lt;br /&gt;
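The three stages can be sketched as follows. This is an illustrative Python rendering that assumes cards are (value, suit) pairs; the project&amp;#039;s MATLAB implementation differs:

```python
VALUES = list(range(1, 14))                       # Ace = 1 .. King = 13
SUITS = ["clubs", "diamonds", "hearts", "spades"]

def full_sort(deck):
    """Bucket (postman's) sort by value, followed by a selection pass by suit."""
    # Stage 1: partition the cards into buckets dependent on their value.
    buckets = {v: [] for v in VALUES}
    for card in deck:
        buckets[card[0]].append(card)
    # Stage 2: empty each bucket progressively onto the board.
    board = [card for v in VALUES for card in buckets[v]]
    # Stage 3: select cards in order and place them in per-suit stacks.
    stacks = {s: [] for s in SUITS}
    for card in board:
        stacks[card[1]].append(card)
    return stacks
```

Because the board is already in value order after stage 2, stage 3 only has to route each card to its suit stack, which keeps the number of physical pick-and-place moves low.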
&lt;br /&gt;
=== Graphical User Interface (GUI) === &lt;br /&gt;
[[File:sortingGUIScreenshot.png|thumb|Screenshot of the sorting GUI.]]&lt;br /&gt;
The GUI was implemented to enhance the project&amp;#039;s interactivity. It shows the viewer the image processing as it happens, with live snapshots of the images and the live decisions made by the computer vision about each card&amp;#039;s value. Because the table shows the entire data structure of the cards that have been scanned and sorted, the user can not only track the sorting process but also ‘see’ what lies below the top card in a given stack.&lt;br /&gt;
&lt;br /&gt;
==Project Significance==&lt;br /&gt;
&lt;br /&gt;
This project acts as a proof of concept for combining computer vision and robotics. It shows that, with more time and more advanced hardware, the combination of the two could produce systems with great potential. Industries that could benefit from such systems include manufacturing, the medical sciences, the military and artificial intelligence. The project also acts as a demonstration to entice future engineering students and to show off the possibilities of electrical engineering.&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Group members ===&lt;br /&gt;
* Mr Daniel Currie&lt;br /&gt;
* Mr Daniel Pacher&lt;br /&gt;
* Mr Jonathan Petrinolis&lt;br /&gt;
&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Brian Ng&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
&lt;br /&gt;
=== Team Member Responsibilities ===&lt;br /&gt;
The project responsibilities are allocated as follows:&lt;br /&gt;
* Mr Daniel Currie - Image Processing&lt;br /&gt;
* Mr Daniel Pacher - Sorting Algorithms/GUI/Hardware Selection&lt;br /&gt;
* Mr Jonathan Petrinolis - Kinematics/Robotic Arm&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
* Bench 16 in [[Projects Lab]]&lt;br /&gt;
* Lynxmotion AL5D Robotic Arm&lt;br /&gt;
* Arduino Botboarduino Microcontroller&lt;br /&gt;
* Microsoft Lifecam Camera&lt;br /&gt;
* MATLAB&lt;br /&gt;
* Computer&lt;/div&gt;</summary>
		<author><name>A1192780</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=1754</id>
		<title>Projects:2014S1-16 Automatic Sorter using Computer Vision</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=1754"/>
		<updated>2014-10-29T14:26:48Z</updated>

		<summary type="html">&lt;p&gt;A1192780: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2014S1|16]]&lt;br /&gt;
&lt;br /&gt;
The aim of this project is to integrate computer vision with autonomous robotics to sort playing cards. The project is primarily an interactive demonstration tool, showing prospective electrical engineering students what the technology can do.&lt;br /&gt;
[[File:board_img.jpg|thumb|Completed Project.]]&lt;br /&gt;
&lt;br /&gt;
== Project information ==&lt;br /&gt;
The objectives identified at the beginning of the project are the following:&lt;br /&gt;
* Sort a full deck of standard playing cards&lt;br /&gt;
* Use computer vision to differentiate between cards&lt;br /&gt;
*Perform the following sorts:&lt;br /&gt;
**Full Sort&lt;br /&gt;
**Suit Sort&lt;br /&gt;
**Colour Sort &lt;br /&gt;
**Value Sort&lt;br /&gt;
* Focus on electrical engineering, particularly image processing, and reduce the mechanical requirements&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This project was tackled by breaking it into four subsections:&lt;br /&gt;
*Computer Vision&lt;br /&gt;
*Robotics&lt;br /&gt;
*Card Sorting Algorithms&lt;br /&gt;
*Graphical User Interface&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Hardware architecture overview.jpg|thumb|Hardware overview.]]&lt;br /&gt;
The system consists of four hardware components: &lt;br /&gt;
* Camera&lt;br /&gt;
* Laptop&lt;br /&gt;
* Microcontroller&lt;br /&gt;
* Robotic Arm&lt;br /&gt;
Their individual functions are detailed in the image to the side.&lt;br /&gt;
&lt;br /&gt;
== Project Breakdown == &lt;br /&gt;
&lt;br /&gt;
=== Image Processing ===&lt;br /&gt;
&lt;br /&gt;
[[File:Image_Proc_Stages_Wiki.jpg|thumb|Image Processing Block Diagram.]]&lt;br /&gt;
&lt;br /&gt;
The purpose of the image processing software of this project is to distinguish between different cards.&lt;br /&gt;
&lt;br /&gt;
This is done using a four-step method:&lt;br /&gt;
&lt;br /&gt;
* Find the outline of the playing card on the black background&lt;br /&gt;
* Crop and warp the playing card so that it is a perfect rectangle&lt;br /&gt;
* Crop the suit and value images from the top left corner&lt;br /&gt;
* Run Optical Character Recognition software on the suit and value images&lt;br /&gt;
&lt;br /&gt;
The digital image techniques used in the above steps are as follows:&lt;br /&gt;
&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Hough_transform The Hough Transform]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Template_matching Pixel Template Matching]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Thresholding_(image_processing) Adaptive Image Thresholding]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Canny_edge_detector Edge Detection]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Blob_detection Blob Detection]&lt;br /&gt;
(Refer to the links for further details.)&lt;br /&gt;
&lt;br /&gt;
[[File:Img proc stages.png|thumb|center|upright=2.0|Image Processing Steps Graphically Represented. (a) Original Image; (b) Outline of Card; (c) Corners of Card; (d) Suit and Value of Card Identified to be cropped]]&lt;br /&gt;
&lt;br /&gt;
====Results====&lt;br /&gt;
The table below shows the results of testing carried out on 24 October 2014. Scripted batch testing was run, as well as tests of the system as a whole.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;  style=&amp;quot;margin: 1em auto 1em auto;&amp;quot;&lt;br /&gt;
|+ &amp;#039;&amp;#039;&amp;#039;Computer Vision Test Results&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
|-&lt;br /&gt;
! Run Name&lt;br /&gt;
! Cards Tested&lt;br /&gt;
! Cards Correct&lt;br /&gt;
|-&lt;br /&gt;
| Batch Test&lt;br /&gt;
| 52&lt;br /&gt;
| 52&lt;br /&gt;
|-&lt;br /&gt;
| System Test 1&lt;br /&gt;
| 52&lt;br /&gt;
| 49&lt;br /&gt;
|-&lt;br /&gt;
| System Test 2&lt;br /&gt;
| 52&lt;br /&gt;
| 52&lt;br /&gt;
|-&lt;br /&gt;
| System Test 3&lt;br /&gt;
| 52&lt;br /&gt;
| 50&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Robotics and Kinematics ===&lt;br /&gt;
Arduino program&lt;br /&gt;
* Connects to MATLAB via USB&lt;br /&gt;
* Receives the new angle for each servo from MATLAB&lt;br /&gt;
* Notifies MATLAB when the robotic arm has finished moving&lt;br /&gt;
&lt;br /&gt;
MATLAB program&lt;br /&gt;
* Decides which set of movements to use, depending on where a card is picked from and where it is placed&lt;br /&gt;
* Uses inverse kinematics to determine the angles of the robotic arm, depending on where the card is to be placed and how high the stack is&lt;br /&gt;
* Small calibrations are made to correct the inverse kinematic method&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t bump into anything&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t pick up two cards stuck together by static electricity&lt;br /&gt;
&lt;br /&gt;
[[File:Robotic arm.JPG|thumb|Inverse kinematic geometry of robotic arm]]&lt;br /&gt;
&lt;br /&gt;
=== Card Sorting Algorithms ===&lt;br /&gt;
[[File:bucketSortDiagram.jpg|thumb|Bucket sort.]]&lt;br /&gt;
&lt;br /&gt;
The cards are sorted via one of the following methods chosen using the GUI:&lt;br /&gt;
* Separate Colours&lt;br /&gt;
* Separate Suits&lt;br /&gt;
* Separate Values&lt;br /&gt;
* Select-A-Card (pick cards you want the robot to find)&lt;br /&gt;
* Full Sort (back to a brand new deck order)&lt;br /&gt;
&lt;br /&gt;
The full sort algorithm is based on a combination of bucket (postman&amp;#039;s) sort and selection sort, and occurs in three stages:&lt;br /&gt;
* Cards are partitioned into buckets dependent on their value, as shown in the image to the right&lt;br /&gt;
* Each bucket is emptied out progressively onto the board&lt;br /&gt;
* Cards are selected in order from the emptied buckets and placed in sorted stacks dependent on their suit&lt;br /&gt;
&lt;br /&gt;
=== Graphical User Interface (GUI) === &lt;br /&gt;
[[File:sortingGUIScreenshot.png|thumb|Screenshot of the sorting GUI.]]&lt;br /&gt;
The GUI was implemented to enhance the project&amp;#039;s interactivity. It shows the viewer the image processing as it happens, with live snapshots of the images and the live decisions made by the computer vision about each card&amp;#039;s value. Because the table shows the entire data structure of the cards that have been scanned and sorted, the user can not only track the sorting process but also ‘see’ what lies below the top card in a given stack.&lt;br /&gt;
&lt;br /&gt;
==Project Significance==&lt;br /&gt;
&lt;br /&gt;
This project acts as a proof of concept for combining computer vision and robotics. It shows that, with more time and more advanced hardware, the combination of the two could produce systems with great potential. Industries that could benefit from such systems include manufacturing, the medical sciences, the military and artificial intelligence. The project also acts as a demonstration to entice future engineering students and to show off the possibilities of electrical engineering.&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Group members ===&lt;br /&gt;
* Mr Daniel Currie&lt;br /&gt;
* Mr Daniel Pacher&lt;br /&gt;
* Mr Jonathan Petrinolis&lt;br /&gt;
&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Brian Ng&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
&lt;br /&gt;
=== Team Member Responsibilities ===&lt;br /&gt;
The project responsibilities are allocated as follows:&lt;br /&gt;
* Mr Daniel Currie - Image Processing&lt;br /&gt;
* Mr Daniel Pacher - Sorting Algorithms/GUI/Hardware Selection&lt;br /&gt;
* Mr Jonathan Petrinolis - Kinematics/Robotic Arm&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
* Bench 16 in [[Projects Lab]]&lt;br /&gt;
* Lynxmotion AL5D Robotic Arm&lt;br /&gt;
* Arduino Botboarduino Microcontroller&lt;br /&gt;
* Microsoft Lifecam Camera&lt;br /&gt;
* MATLAB&lt;br /&gt;
* Computer&lt;/div&gt;</summary>
		<author><name>A1192780</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Robotic_arm.JPG&amp;diff=1752</id>
		<title>File:Robotic arm.JPG</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=File:Robotic_arm.JPG&amp;diff=1752"/>
		<updated>2014-10-29T14:21:13Z</updated>

		<summary type="html">&lt;p&gt;A1192780: The robotic arm diagram used to construct the geometric basis for the inverse kinematics method&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The robotic arm diagram used to construct the geometric basis for the inverse kinematics method&lt;/div&gt;</summary>
		<author><name>A1192780</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=913</id>
		<title>Projects:2014S1-16 Automatic Sorter using Computer Vision</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=913"/>
		<updated>2014-10-09T13:50:28Z</updated>

		<summary type="html">&lt;p&gt;A1192780: /* Robotics and Kinematics p */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2014S1|16]]&lt;br /&gt;
&lt;br /&gt;
== Project information ==&lt;br /&gt;
The aim of this project is to integrate computer vision with autonomous robotics to sort playing cards. The project is primarily an interactive demonstration tool, showing prospective electrical engineering students what the technology can do.&lt;br /&gt;
&lt;br /&gt;
== Project Breakdown == &lt;br /&gt;
&lt;br /&gt;
=== Image Processing ===&lt;br /&gt;
&lt;br /&gt;
[[File:Image_Proc_Stages_Wiki.jpg|thumb|Image Processing Block Diagram.]]&lt;br /&gt;
&lt;br /&gt;
The purpose of the image processing software of this project is to distinguish between different cards.&lt;br /&gt;
&lt;br /&gt;
This is done using a four-step method:&lt;br /&gt;
&lt;br /&gt;
* Find the outline of the playing card on the black background&lt;br /&gt;
* Crop and warp the playing card so that it is a perfect rectangle&lt;br /&gt;
* Crop the suit and value images from the top left corner&lt;br /&gt;
* Run Optical Character Recognition software on the suit and value images&lt;br /&gt;
&lt;br /&gt;
The digital image techniques used in the above steps are as follows:&lt;br /&gt;
&lt;br /&gt;
* The Hough Transform&lt;br /&gt;
* Pixel Template Matching&lt;br /&gt;
* Mean Filtering&lt;br /&gt;
* Adaptive Image Thresholding&lt;br /&gt;
&lt;br /&gt;
=== Robotics and Kinematics ===&lt;br /&gt;
Arduino program&lt;br /&gt;
* Connects to MATLAB via USB&lt;br /&gt;
* Receives the new angle for each servo from MATLAB&lt;br /&gt;
* Notifies MATLAB when the robotic arm has finished moving&lt;br /&gt;
&lt;br /&gt;
MATLAB program&lt;br /&gt;
* Decides which set of movements to use, depending on where a card is picked from and where it is placed&lt;br /&gt;
* Uses inverse kinematics to determine the angles of the robotic arm, depending on where the card is to be placed and how high the stack is&lt;br /&gt;
* Small calibrations are made to correct the inverse kinematic method&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t bump into anything&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t pick up two cards stuck together by static electricity&lt;br /&gt;
&lt;br /&gt;
=== Card Sorting Algorithms ===&lt;br /&gt;
The cards are sorted via one of the following methods chosen using the GUI:&lt;br /&gt;
* Separate Colours&lt;br /&gt;
* Separate Suits&lt;br /&gt;
* Separate Values&lt;br /&gt;
* Full Sort (back to a new deck order)&lt;br /&gt;
&lt;br /&gt;
The full sort algorithm is based on a selection sort and occurs in two stages:&lt;br /&gt;
* Cards are placed in stacks dependent on their value&lt;br /&gt;
* Cards are selected from these stacks and placed Ace-King in four suit stacks&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Group members ===&lt;br /&gt;
* Mr Daniel Currie&lt;br /&gt;
* Mr Daniel Pacher&lt;br /&gt;
* Mr Jonathan Petrinolis&lt;br /&gt;
&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Brian Ng&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
&lt;br /&gt;
=== Team Member Responsibilities ===&lt;br /&gt;
The project responsibilities are allocated as follows:&lt;br /&gt;
* Mr Daniel Currie - Image Processing&lt;br /&gt;
* Mr Daniel Pacher - Sorting Algorithms/GUI/Hardware Selection&lt;br /&gt;
* Mr Jonathan Petrinolis - Kinematics/Robotic Arm&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
* Bench 16 in [[Projects Lab]]&lt;br /&gt;
* Lynxmotion AL5D Robotic Arm&lt;br /&gt;
* Arduino Botboarduino Microcontroller&lt;br /&gt;
* Microsoft Lifecam Camera&lt;br /&gt;
* MATLAB&lt;br /&gt;
* Computer&lt;/div&gt;</summary>
		<author><name>A1192780</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=912</id>
		<title>Projects:2014S1-16 Automatic Sorter using Computer Vision</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2014S1-16_Automatic_Sorter_using_Computer_Vision&amp;diff=912"/>
		<updated>2014-10-09T13:50:01Z</updated>

		<summary type="html">&lt;p&gt;A1192780: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2014S1|16]]&lt;br /&gt;
&lt;br /&gt;
== Project information ==&lt;br /&gt;
The aim of this project is to integrate computer vision with autonomous robotics to sort playing cards. The project is primarily an interactive demonstration tool, showing prospective electrical engineering students what the technology can do.&lt;br /&gt;
&lt;br /&gt;
== Project Breakdown == &lt;br /&gt;
&lt;br /&gt;
=== Image Processing ===&lt;br /&gt;
&lt;br /&gt;
[[File:Image_Proc_Stages_Wiki.jpg|thumb|Image Processing Block Diagram.]]&lt;br /&gt;
&lt;br /&gt;
The purpose of the image processing software of this project is to distinguish between different cards.&lt;br /&gt;
&lt;br /&gt;
This is done using a four-step method:&lt;br /&gt;
&lt;br /&gt;
* Find the outline of the playing card on the black background&lt;br /&gt;
* Crop and warp the playing card so that it is a perfect rectangle&lt;br /&gt;
* Crop the suit and value images from the top left corner&lt;br /&gt;
* Run Optical Character Recognition software on the suit and value images&lt;br /&gt;
&lt;br /&gt;
The digital image techniques used in the above steps are as follows:&lt;br /&gt;
&lt;br /&gt;
* The Hough Transform&lt;br /&gt;
* Pixel Template Matching&lt;br /&gt;
* Mean Filtering&lt;br /&gt;
* Adaptive Image Thresholding&lt;br /&gt;
&lt;br /&gt;
=== Robotics and Kinematics ===&lt;br /&gt;
Arduino program&lt;br /&gt;
* Connects to MATLAB via USB&lt;br /&gt;
* Receives the new angle for each servo from MATLAB&lt;br /&gt;
* Notifies MATLAB when the robotic arm has finished moving&lt;br /&gt;
&lt;br /&gt;
MATLAB program&lt;br /&gt;
* Decides which set of movements to use, depending on where a card is picked from and where it is placed&lt;br /&gt;
* Uses inverse kinematics to determine the angles of the robotic arm, depending on where the card is to be placed and how high the stack is&lt;br /&gt;
* Small calibrations are made to correct the inverse kinematic method&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t bump into anything&lt;br /&gt;
* Movements are included to ensure the robotic arm doesn&amp;#039;t pick up two cards stuck together by static electricity&lt;br /&gt;
&lt;br /&gt;
=== Card Sorting Algorithms ===&lt;br /&gt;
The cards are sorted via one of the following methods chosen using the GUI:&lt;br /&gt;
* Separate Colours&lt;br /&gt;
* Separate Suits&lt;br /&gt;
* Separate Values&lt;br /&gt;
* Full Sort (back to a new deck order)&lt;br /&gt;
&lt;br /&gt;
The full sort algorithm is based on a selection sort and occurs in two stages:&lt;br /&gt;
* Cards are placed in stacks dependent on their value&lt;br /&gt;
* Cards are selected from these stacks and placed Ace-King in four suit stacks&lt;br /&gt;
&lt;br /&gt;
== Team ==&lt;br /&gt;
=== Group members ===&lt;br /&gt;
* Mr Daniel Currie&lt;br /&gt;
* Mr Daniel Pacher&lt;br /&gt;
* Mr Jonathan Petrinolis&lt;br /&gt;
&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Dr Brian Ng&lt;br /&gt;
* Dr Braden Phillips&lt;br /&gt;
&lt;br /&gt;
=== Team Member Responsibilities ===&lt;br /&gt;
The project responsibilities are allocated as follows:&lt;br /&gt;
* Mr Daniel Currie - Image Processing&lt;br /&gt;
* Mr Daniel Pacher - Sorting Algorithms/GUI/Hardware Selection&lt;br /&gt;
* Mr Jonathan Petrinolis - Kinematics/Robotic Arm&lt;br /&gt;
&lt;br /&gt;
== Resources ==&lt;br /&gt;
* Bench 16 in [[Projects Lab]]&lt;br /&gt;
* Lynxmotion AL5D Robotic Arm&lt;br /&gt;
* Arduino Botboarduino Microcontroller&lt;br /&gt;
* Microsoft Lifecam Camera&lt;br /&gt;
* MATLAB&lt;br /&gt;
* Computer&lt;/div&gt;</summary>
		<author><name>A1192780</name></author>
		
	</entry>
</feed>