Projects:2014s2-75 Formation Control of Two Autonomous Smart Cars

Revision as of 00:40, 4 June 2015 by A1652577

Aim

The aim of this project is to build a model of two smart cars that can move independently, without any human intervention. Both cars have autonomous control: each must recognise its destination and travel there along a definite path. The cars operate in two major modes, control mode and signal relay mode.

In control mode, the cars move in a chase-dodge pattern: one car pursues the other at a set speed, while the second car recognises the chase and tries to evade it. When the distance between them falls below a particular threshold, the cars change behaviour and the roles reverse: the dodger turns back and pursues its pursuer, while the original chaser turns away to evade.

In signal relay mode, the movements of both cars are coordinated to carry a signal from a source to a destination. Car 1 carries the signal to an intermediate location; Car 2 moves from its initial position to the same location; Car 1 hands the data to Car 2, which then carries it from that location to the final destination.

Background and Significance

Over the years, driving has become one of the biggest life-threatening risks. The rapid increase in the number of vehicles has brought a proportional increase in the number of accidents: studies show that about 1.24 million deaths occur worldwide each year due to traffic accidents. Carelessness, drink driving and speeding are the major causes. Despite safety advancements such as ABS, airbags and anti-collision systems, cars remain more dangerous than other modes of transport such as buses, aircraft or trains. Beyond accidents, driving has further disadvantages, including increased stress and fatigue; cases of mental illness have even been reported as a result of long-distance driving and congested traffic. Autonomous cars could be a solution to this situation. A sufficiently sophisticated system of smart cars could reduce accidents caused by human error, and passengers could relax without the burden of driving.

There has been significant progress in smart-car research in the recent past. Google's autonomous car project has gone on to test smart cars in real environments. GPS is an important factor in such implementations: it must provide not only the location and coordinates of the car but also detailed information about the driving environment, such as the colour of traffic lights, curb widths and the height of bumps. Google uses its maps and satellite images for path finding, inertial sensors, and wheel encoders for calculating speed. Because of the difficulty of detecting traffic-light colours under glare or rain, the difficulty of mapping the whole world in detail, and the challenges of decision making, smart-car technology is still in its development stages.
The intelligent car is a branch of intelligent robotics; it is a system that combines automatic control, artificial intelligence, mechanical engineering, image processing and computer science. The main difficulty in an intelligent car is image processing: its accuracy directly affects the car's driving direction, driving speed and ability to dodge obstacles. Detecting and tracking a moving target are the main parts of the image-processing task. Image-processing techniques and software have improved thanks to advanced software, greater processing capability, digital image-processing techniques and better hardware. Several methods have been introduced for processing images and finding moving targets. The images we capture are in RGB; for processing, they are converted to HSV. The CamShift algorithm, pixel processing, background subtraction, Gaussian distribution modelling and noise elimination are some of the modern techniques used in image processing.
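The RGB-to-HSV conversion mentioned above can be tried directly from the Python standard library. This is only a single-pixel illustration; in practice a library such as OpenCV would convert whole frames at once:

```python
import colorsys

# colorsys works on single pixels with channels in the 0-1 range.
# Pure red in RGB maps to hue 0, full saturation, full value in HSV:
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)   # 0.0 1.0 1.0
```

Hue separates colour identity from brightness, which is why thresholding in HSV is more robust to lighting changes than thresholding in RGB.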

Motivation

This project examines ways in which the autonomous movement of a car can be designed and managed. The robotic technology and artificial-intelligence techniques proposed here can be applied to robotics and traffic systems. The project gives hands-on experience with Arduino robotics, a starting step towards more intensive robotic technologies. Traffic-signalling and camera systems have recently come to rely on significant amounts of image processing; the image processing used in this project serves as a guideline for detecting a moving target and controlling it, and the same techniques are applicable in machine vision and medical imaging. With more accurate sensors such as gyroscopes in the future, this work could be used to control robots remotely to reach and investigate inaccessible or congested areas.

Requirements

The project requirements can be classified into hardware design and implementation, and software application. The major requirements of the project are:

• Research and design a hardware system that can serve as the model of two smart cars, and a control system to be used as the nerve centre of the process

• The hardware design should consist of a model of two smart cars that can move independently

• Design and implementation of a core algorithm to perform the chase-dodge model

• Implementation of a tracking system to track the locations of the cars in real time

• Implementation of a communication system through which the cars receive their locations from the tracking system, and of a drive and power system so that the cars can execute the movement algorithm

Hardware Design and Methodology

Arduino robots are used as the models of the smart cars. Colour plates, one red and one green, are fixed above the two robots respectively. A camera captures images of the two robots and the background and sends them to the computer; a network router bridges the camera's Wi-Fi to the computer's Wi-Fi network. The computer receives each image as packets of data, which are combined to reproduce the frame. Using OpenCV, the locations of the colour plates are recognised, and the centre-of-mass method gives the centre point of each plate, which is taken as the centre point of the robot. The coordinates of these centre points are sent to the robots through the computer's local Wi-Fi. With the help of the Arduino Wi-Fi Shield, each robot receives the coordinates of its own location and of the other robot's location. The robots are programmed in the Arduino IDE to realise the chaser-dodger and relay-race algorithms. To find its orientation and direction, each robot uses the lantern-fish method, which provides the angle of deviation from geographic north. In the relay-race model, the Wi-Fi Shield of one robot receives the data from the computer and the robot moves to a particular location; there it shares the data with the other robot, which in turn moves from its initial position to the destination and delivers the data.
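The centre-of-mass step can be sketched in pure Python. In the project, OpenCV's colour thresholding would produce the binary mask; this sketch only shows the moment calculation itself, and the sample mask is illustrative:

```python
def mask_centroid(mask):
    """Centre of mass of a binary mask (rows of 0/1 pixel values).

    Accumulates the zeroth and first image moments and returns the
    centroid (x, y) in pixel coordinates, or None for an empty mask.
    """
    m00 = m10 = m01 = 0
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                m00 += 1      # area (zeroth moment)
                m10 += x      # first moment about x
                m01 += y      # first moment about y
    if m00 == 0:
        return None
    return (m10 / m00, m01 / m00)

# A 3x3 block of "plate" pixels centred at (3, 4):
mask = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(2, 5):
        mask[y][x] = 1
print(mask_centroid(mask))   # (3.0, 4.0)
```

With OpenCV the same result would come from `cv2.moments` on the thresholded mask, with the centroid given by m10/m00 and m01/m00.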

Determining Camera Specifications

Requirements

Height of the camera from the ground = 2.5m

Length x breadth of the field = 5m x 5m

Area to be covered by the camera = 5m x 5m =25m^2

Resolution >= 1cm

Frame rate >= 20fps

Wi-Fi support: 802.11b/g wireless LAN, TCP/IP network protocol

Calculations

Field of view

Height of the camera from field = 2.5m

Distance to the farthest view on X axis = 2.5m




Angle from camera to farthest side = arctan(2.5/2.5) = 45°

Field of view in x axis = 2×45° = 90°

Angle from camera to farthest side on the y axis = arctan(5.6/2.5) ≈ 66°

Field of view in y axis ≈ 66°

So minimum camera field of view = 90°

Resolution

Area to be covered = 25m^2

Resolution required in cm = 1cm

So, minimum resolution required = 5 m / 1 cm = 500 pixels per axis, i.e. 500 × 500 pixels = 0.25 MP
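The two sizing calculations above can be checked with a short script (a sketch; the function names are illustrative):

```python
import math

def required_fov_deg(height_m, half_extent_m):
    """Full view angle needed to cover +-half_extent_m at the given camera height."""
    return 2 * math.degrees(math.atan(half_extent_m / height_m))

def required_pixels(extent_m, resolution_m):
    """Pixels per axis needed to resolve resolution_m over a field of extent_m."""
    return math.ceil(extent_m / resolution_m)

fov_x = required_fov_deg(2.5, 2.5)   # camera 2.5 m up, field half-width 2.5 m
px = required_pixels(5.0, 0.01)      # 5 m field at 1 cm ground resolution
print(round(fov_x, 6), px)           # 90.0 500
```

The 90° figure matches the x-axis derivation above, and 500 pixels per axis is the source of the minimum sensor resolution.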

Experiments

Lighting LEDs

One requirement of the project was to find a suitable way for the camera to detect the robots. The initial plan was to mount lit LEDs above each robot for the camera to detect, and the LEDs were wired and programmed accordingly. In practice, however, the camera found it difficult to detect the LEDs, especially in daylight glare, so a colour plate is used instead.

Wheel Calibration

A wheel-calibration program is used to help the robot drive as straight as possible. This is done by dividing the voltage between the two wheel motors proportionately: the potentiometer trim on the Arduino motor board acts as a voltage divider across the wheels. By adjusting the trim, the robot can be made to move reasonably straight. The program outputs the wheel voltages, the trim value and instructions to the screen: it reads the potentiometer trim, displays it, and shows programmed instructions so the user can make the adjustments. The compass is read continuously to check whether the robot holds a constant direction. A screwdriver is used to turn the trim clockwise or anti-clockwise to redistribute the voltage between the motors.

Robot Orientation

Even after wheel calibration, the robot deviates from its straight path, especially over large distances, because of imperfections in the shape of the tyres and small differences in size between them. To eliminate these deviations, constant feedback is needed on the direction in which the robot is heading, compared with its previous direction. The feedback in this experiment comes from the compass, which is read in each loop of the program. Initially, equal voltage is given to both wheels. The program reads the compass and compares it with the previous value; if there is a difference, the voltage to the right wheel is varied while the left wheel is held constant. A positive error voltage is added to the right wheel if the robot tends to drift right, and a negative error voltage, which decreases the original voltage, is applied otherwise. The larger the deviation, the larger the error voltage, which helps the robot return to its original orientation. The experiment is done in a controlled environment, free from external magnetic interference.

Eliminating oscillations

The feedback given to the robot's motors can make it unstable and force it into an oscillatory path, causing the robot to wobble. This is due to the wheels' response to the feedback: the inertia of the motors overcompensates for deviations from the straight path, pushing the robot past it in the other direction. This is eliminated by controlling the error voltage supplied to the wheels: when the error voltage exceeds a particular value, the program clamps it to that maximum, which in turn limits the voltage to the wheel. This lets the robot move smoothly and eliminates the oscillations. The maximum error voltage was found by trial and error to be 0.39 V.
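The clamping step is a simple saturation; of the values below, only the 0.39 V limit comes from the project:

```python
MAX_ERROR_V = 0.39   # limit found by trial and error in the project

def clamp_error(error_v, limit=MAX_ERROR_V):
    """Saturate the feedback voltage so motor inertia cannot
    overcompensate and swing the robot into oscillation."""
    return max(-limit, min(limit, error_v))

print(clamp_error(1.2))    # 0.39
print(clamp_error(-0.5))   # -0.39
print(clamp_error(0.1))    # 0.1
```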

Orientation control

The images sent by the camera are processed by the computer and the locations of the robots are identified; these locations are sent to the robots over Wi-Fi as coordinates. The program finds the direction in which the robot is pointing and the direction in which the chaser robot should turn in order to chase the dodger; the tangent function and related trigonometric calculations return the orientation to which the robot should turn.

Once the required orientation is determined, the program sends a positive voltage to one wheel and a negative voltage to the other, so that one wheel moves forward while the other rolls backward, rotating the robot on its own axis. Feedback from the compass determines when the required orientation is reached; the polarity of both wheel voltages is then made positive, and the robot moves in a straight line. The voltage given to the wheels during rotation is directly proportional to the amount of rotation still required: as the robot nears the target orientation, the motor speed is reduced so that torque and inertia do not cause an overshoot. The direction of rotation, clockwise or anti-clockwise, is chosen as whichever requires the smaller angle of rotation.
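These steps can be sketched as follows. The bearing uses the arctangent of the coordinate differences, under the assumed convention that 0° points along +y and angles grow clockwise; the function names, voltages and slow-down zone are illustrative, not the project's actual values:

```python
import math

def bearing_to_target(x, y, tx, ty):
    """Compass-style bearing (degrees) from the robot at (x, y)
    to the target at (tx, ty)."""
    return math.degrees(math.atan2(tx - x, ty - y)) % 360

def turn_command(heading_deg, bearing_deg, max_v=1.0, slow_zone_deg=45.0):
    """Opposite-polarity wheel voltages (left_v, right_v) that spin the
    robot on the spot, turning the shorter way and slowing near the goal."""
    err = (bearing_deg - heading_deg + 180) % 360 - 180  # signed, shortest way
    v = max_v * min(1.0, abs(err) / slow_zone_deg)       # proportional slowdown
    return (v, -v) if err > 0 else (-v, v)

print(round(bearing_to_target(0, 0, 1, 0), 1))   # 90.0 (target due "east")
print(turn_command(0, 90))                       # (1.0, -1.0): full-speed clockwise spin
```

Taking the error through the -180°..180° wrap is what makes the robot pick the shorter rotation direction, and scaling the voltage by the remaining angle is what prevents the overshoot described above.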

Conclusion

The aims of the project were achieved. The project found a workable design for the smart cars, with the camera and image processing proposed as its main output. Both the control mode, in which the cars move independently, and the signal relay mode, in which the cars share tasks, were completed successfully. The originally proposed Bluetooth triangulation method was replaced with the camera because finding range by Bluetooth triangulation proved infeasible. The failure of the compass as a direction sensor under magnetic interference affected the project, but it showed that modern equipment such as gyroscopes can replace compass technology. Communication was originally proposed over Bluetooth but was replaced with Wi-Fi for its greater bandwidth and lower interference.

Future Work

In future, the compass module can be replaced with a gyroscope, a device that can find direction even in the absence of the Earth's magnetic field or in the presence of interference from other magnetic equipment. The project can be extended to multiple robots and to tracking them. Features such as LIDAR, which uses lasers to map the external environment, SONAR, which sends sound waves to measure distance, RADAR and IR sensors can be added for greater accuracy and feedback. The image processing used in the project can be improved for high-speed measurement when the vehicles move faster, and a local vision system, with a camera placed above each car, could track the environment.

Other applications

Apart from the system of smart cars, the project can be extended into broader robotic technology: a robot could be controlled remotely to access congested areas and track other targets. The image-processing technology used in the project can also be extended to traffic-signal systems and facial recognition.