Projects:2017s1-170 Formation Control and Obstacle Avoidance for Heterogeneous Multi-Agent Systems (Unmanned Aerial Vehicles and Robots)
== Introduction ==
This project builds an autonomous heterogeneous multi-vehicle system in which several unmanned ground vehicles (UGVs) and an unmanned aerial vehicle (UAV) achieve formation control and obstacle avoidance. The UAV can move to any destination by controlling its four rotors, while two UGVs, each driven by a pair of wheels, act as followers. The UGVs maintain a prescribed formation around the UAV and bypass any obstacles on the ground as they move: when a UGV encounters an obstacle, it temporarily leaves the formation and gives priority to avoiding the obstacle, and when several obstacles are present it plans an optimal path that still follows the UAV as the leader.
== Motivation ==
................
== System Structure ==
The whole system consists of four major parts: the UAV, the UGVs, the OptiTrack camera system, and a host computer connected to these subsystems over Wi-Fi, with a wireless modem broadcasting the Wi-Fi signal as the communication channel. The hardware is shown in Figure 1.1.
Figure 1.1: Hardware of the system
The system works as follows. First, the OptiTrack camera system observes the surrounding environment, identifies the UAV and the UGVs, and measures their positions. Second, these positions are transmitted from the camera system to a controller built in Simulink on the host computer, which converts the position data into rotational speeds for the UAV's four rotors and velocities for each UGV's two wheels. Finally, on receiving these commands, the UAV moves to the desired destination while the UGVs follow it in formation, bypassing any obstacles along the way. The overall operation is illustrated in Figure 1.2. The following sections focus on path planning and obstacle avoidance and examine which methods can address this problem.
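As a rough illustration of this sense-compute-actuate loop, the sketch below shows its structure in Python. This is only an outline of the data flow described above: the real controller is built from Simulink/QUARC blocks, and every function here is a hypothetical stand-in, not an OptiTrack or QUARC API.

<syntaxhighlight lang="python">
# Illustrative sketch only: all functions below are hypothetical
# stand-ins for the Simulink/QUARC blocks described in the text.

def read_poses():
    # Stand-in for the OptiTrack measurement of UAV and UGV positions.
    return (0.0, 0.0, 1.0), [(0.5, 0.0, 0.0), (-0.5, 0.0, 0.0)]

def uav_controller(uav_pose):
    # Stand-in: map the UAV pose error to four rotor speeds.
    return [0.0, 0.0, 0.0, 0.0]

def ugv_controller(uav_pose, ugv_pose):
    # Stand-in: map leader/follower poses to two wheel velocities.
    return (0.0, 0.0)

for _ in range(100):                        # one control cycle per step
    uav_pose, ugv_poses = read_poses()      # 1. locate the vehicles
    rotors = uav_controller(uav_pose)       # 2. compute rotor speeds
    wheels = [ugv_controller(uav_pose, p) for p in ugv_poses]
    # 3. rotor and wheel commands would be sent to the vehicles over Wi-Fi here
</syntaxhighlight>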
== Kinematics of UGV ==
As described above, this project uses unmanned ground vehicles to perform formation control and path planning with obstacle avoidance, which is the part of the work I am responsible for. It is therefore necessary to explain the kinematics of unmanned ground vehicles and how they are controlled to reach a desired position and follow an expected trajectory.

=== Introduction of the unmanned ground robot ===

This project uses a mature product, the QBot 2, an autonomous ground robot designed by Quanser. The QBot 2 consists of a Yujin Robot Kobuki platform, a Microsoft Kinect RGB camera and depth sensor, and a Quanser DAQ with a wireless embedded computer [2]. The appearance of the QBot 2 is shown in Figure 3.1.
Figure 3.1: The appearance of the QBot 2

A computer system is embedded in the vehicle: a Gumstix DuoVero computer runs QUARC, Quanser's real-time control software, and interfaces with the QBot 2 data acquisition card (DAQ) [2]. The target computer is programmed from MATLAB Simulink with QUARC. The QBot 2 relies on several software modules: the Quanser Hardware-In-the-Loop module to receive sensor signals and output actuator signals, the Quanser Stream API module to set up wireless communication, the Quanser Multimedia module to read RGB and depth image data from the Kinect sensor, and the MathWorks Computer Vision System Toolbox to perform image processing. This configuration is shown in Figure 3.2.
Figure 3.2: The configuration of the system

The QBot 2 is built on a Kobuki mobile robot platform that provides many signal ports and sensors to support multiple functions. The functions of the QBot 2 are shown in Figure 3.3.
Figure 3.3: The functions of the QBot 2

Because the ground robot in this project is mainly used for formation control and dynamic path planning, with its position supplied by the camera system designed by Xiaoyuan Wang, many functions of the QBot 2 are not described in detail here. The important point is that the QBot 2 is driven by two differential-drive wheels with built-in encoders, so the only way to make it move as expected is to command the velocities of the two wheels. The structure of the QBot 2 with its two wheels is shown in Figure 3.4.
Figure 3.4: The structure of the QBot 2 with its two wheels

Lastly, the specifications of the QBot 2 should be introduced, since these parameters are used throughout the kinematics section below. For example, the maximum speed limits how quickly the QBot 2 can reach a given position, and the distance between the two wheels determines the trajectory produced by a given pair of wheel velocities. The specifications of the QBot 2 are listed in Table 3.1.

{| class="wikitable"
|+ Table 3.1: QBot 2 specifications
! Symbol !! Description !! Value !! Unit
|-
| <math>D</math> || Diameter of the QBot 2 || 0.35 || m
|-
| <math>d</math> || Distance between the left and right wheels || 0.235 || m
|-
| <math>h</math> || Height of the QBot 2 (with Kinect mounted) || 0.27 || m
|-
| <math>v_{max}</math> || Maximum speed of the QBot 2 || 0.7 || m/s
|-
| <math>m</math> || Total mass of the QBot 2 || 3.79 || kg
|}

=== Kinematics of the unmanned ground robot ===

As discussed above, the Quanser QBot 2 mobile platform has two central drive wheels mounted on a common axis that bisects the robot. This drive configuration is called differential drive, and it relates the independent motion of the two wheels to the overall motion of the robot chassis through a simple mathematical model [1]. The motion of each wheel is measured by encoders, and the robot's orientation angle is either estimated with the inertial gyro or computed from the wheel motion, as described below. To understand the relationship between the independent motion of the two wheels and the motion of the robot as a whole, the motion of the robot about a common point is modelled, as shown in Figure 3.5.
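For use in the code sketches that follow, the parameters in Table 3.1 can be collected as constants. Python is used here purely for illustration; the actual controllers run in MATLAB Simulink with QUARC.

<syntaxhighlight lang="python">
# QBot 2 parameters from Table 3.1 (SI units).
QBOT2_DIAMETER  = 0.35    # D: diameter of the QBot 2 (m)
QBOT2_WHEELBASE = 0.235   # d: distance between left and right wheels (m)
QBOT2_HEIGHT    = 0.27    # h: height with Kinect mounted (m)
QBOT2_V_MAX     = 0.7     # v_max: maximum chassis speed (m/s)
QBOT2_MASS      = 3.79    # m: total mass (kg)
</syntaxhighlight>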
Figure 3.5: QBot 2 mobile platform reference frame definitions

Let the radius of the wheels be denoted by <math>r</math>, and let the wheel rotational speeds be denoted by <math>\omega_L</math> and <math>\omega_R</math> for the left and right wheel respectively. Assuming the wheels roll without slipping, the linear speed of each wheel in the two-dimensional plane of motion is

<math>v_L = r\,\omega_L, \qquad v_R = r\,\omega_R</math>
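A one-function sketch of this no-slip conversion is shown below. Note that the wheel radius is not listed in Table 3.1, so the default value here is only an assumed placeholder, not an official QBot 2 parameter.

<syntaxhighlight lang="python">
def wheel_linear_speeds(omega_l, omega_r, wheel_radius=0.035):
    """Convert wheel rotational speeds (rad/s) to linear speeds (m/s).

    wheel_radius is an assumed placeholder value; the QBot 2 wheel
    radius is not given in Table 3.1.
    """
    return wheel_radius * omega_l, wheel_radius * omega_r
</syntaxhighlight>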
If the wheels roll without slipping, the QBot 2 can move along the horizontal plane in a straight line or a curved trajectory, or even spin on the spot; in every case the robot rotates about some point. Even when the two wheel velocities are equal and the robot moves in a straight line, this rotation point still exists, although it lies at infinity [1]. This centre of rotation is called the Instantaneous Centre of Curvature (ICC). Let <math>r_{ICC}</math> be the distance from the ICC to the centre of the robot chassis. If <math>d</math> is the distance between the left and right wheels, <math>\theta</math> is the heading angle of the robot, and <math>v_C</math> is the (forward/backward) speed of the centre of the robot chassis, the motion of the QBot 2 chassis is described by

<math>v_C = \dot{\theta}\,r_{ICC}</math>

<math>v_L = \dot{\theta}\left(r_{ICC} - \frac{d}{2}\right)</math>

<math>v_R = \dot{\theta}\left(r_{ICC} + \frac{d}{2}\right)</math>

Here <math>v_C</math>, <math>v_L</math> and <math>v_R</math> are all defined along the same axis, pointing in the forward/backward direction of the chassis. Solving these equations for the chassis speed <math>v_C</math>, the angular rate <math>\omega_C = \dot{\theta}</math> and the distance <math>r_{ICC}</math> gives the relationship between the motion of the wheels and the motion of the robot:

<math>v_C = \frac{v_L + v_R}{2}</math>

<math>\omega_C = \dot{\theta} = \frac{v_R - v_L}{d}</math>

<math>r_{ICC} = \frac{d\,(v_R + v_L)}{2\,(v_R - v_L)}</math>

Conversely, the inverse kinematics model computes the wheel speeds required to obtain a desired chassis speed <math>v_C</math> and angular rate <math>\omega_C</math>:

<math>\begin{bmatrix} v_R \\ v_L \end{bmatrix} = \begin{bmatrix} v_C + \tfrac{1}{2} d\,\omega_C \\ v_C - \tfrac{1}{2} d\,\omega_C \end{bmatrix}</math>

The kinematics model above is derived in a local frame of reference: the chassis speed <math>v_C</math> is defined with respect to the robot itself, not in the global frame of the whole environment. Because the chassis orientation changes whenever the angular rate <math>\omega_C = \dot{\theta}</math> is non-zero, the motion variables must be transformed from the local reference frame to the global reference frame to obtain the robot motion in global coordinates [1]. For a robot with orientation angle <math>\theta</math>, the transformation is the rotation matrix

<math>R = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}</math>

A state vector <math>S</math> is defined to contain the position, <math>x</math> and <math>y</math>, and the orientation angle <math>\theta</math>:

<math>S = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}, \qquad \dot{S} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix}</math>

The <math>x</math> and <math>y</math> axes span the two-dimensional plane in which the robot moves. The heading <math>\theta</math> is defined with respect to the <math>x</math> axis, so the heading is zero (<math>\theta = 0</math>) when the forward direction of the chassis is aligned with the <math>x</math> axis [1].
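The forward and inverse relations above translate directly into code. The following minimal Python sketch uses the wheel separation <math>d</math> from Table 3.1; the function names are illustrative and not part of the QUARC software.

<syntaxhighlight lang="python">
import math

D_WHEELBASE = 0.235  # d: distance between the wheels (m), from Table 3.1

def forward_kinematics(v_l, v_r, d=D_WHEELBASE):
    """Chassis speed v_C, angular rate w_C and ICC radius r_ICC
    from the left and right wheel speeds (m/s)."""
    v_c = (v_l + v_r) / 2.0
    w_c = (v_r - v_l) / d
    # r_ICC is infinite for straight-line motion (v_l == v_r).
    r_icc = math.inf if v_r == v_l else d * (v_r + v_l) / (2.0 * (v_r - v_l))
    return v_c, w_c, r_icc

def inverse_kinematics(v_c, w_c, d=D_WHEELBASE):
    """Wheel speeds (v_R, v_L) that realise a desired chassis speed
    v_C (m/s) and angular rate w_C (rad/s)."""
    v_r = v_c + 0.5 * d * w_c
    v_l = v_c - 0.5 * d * w_c
    return v_r, v_l
</syntaxhighlight>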
The time derivative of the state can be expressed using the chassis speed <math>v_C</math> and angular rate <math>\omega_C</math>:

<math>\dot{S} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v_C \cos\theta \\ v_C \sin\theta \\ \omega_C \end{bmatrix} = \begin{bmatrix} \frac{v_L + v_R}{2} \cos\theta \\ \frac{v_L + v_R}{2} \sin\theta \\ \frac{v_R - v_L}{d} \end{bmatrix}</math>

Likewise, the position of the Instantaneous Centre of Curvature, <math>(x_{ICC}, y_{ICC})</math>, can be expressed in the global reference frame by applying the rotation <math>R</math>:

<math>\begin{bmatrix} x_{ICC} \\ y_{ICC} \\ 0 \end{bmatrix} = \begin{bmatrix} x \\ y \\ 0 \end{bmatrix} + R \begin{bmatrix} 0 \\ r_{ICC} \\ 0 \end{bmatrix} = \begin{bmatrix} x - r_{ICC}\sin\theta \\ y + r_{ICC}\cos\theta \\ 0 \end{bmatrix} = \begin{bmatrix} x - \frac{d(v_R + v_L)}{2(v_R - v_L)}\sin\theta \\ y + \frac{d(v_R + v_L)}{2(v_R - v_L)}\cos\theta \\ 0 \end{bmatrix}</math>

Given the robot chassis state vector <math>S(t)</math> and its rate of change <math>\dot{S}(t)</math> expressed in the global reference frame, the robot pose at time <math>t</math> can be computed as

<math>S(t) = \begin{bmatrix} x(t) \\ y(t) \\ \theta(t) \end{bmatrix} = \int_0^t \dot{S}(\tau)\,d\tau</math>

For the QBot 2, given the wheel separation <math>d</math>, the heading <math>\theta</math> and the wheel speeds <math>v_L</math> and <math>v_R</math>, the equations of motion for odometric localization are

<math>S(t) = \begin{bmatrix} x(t) \\ y(t) \\ \theta(t) \end{bmatrix} = \int_0^t \begin{bmatrix} \frac{v_L(\tau) + v_R(\tau)}{2} \cos\theta(\tau) \\ \frac{v_L(\tau) + v_R(\tau)}{2} \sin\theta(\tau) \\ \frac{v_R(\tau) - v_L(\tau)}{d} \end{bmatrix} d\tau</math>

To evaluate this integral numerically, a small time step <math>\delta t</math> is introduced, so that the integration is approximated by a first-order Taylor series expansion:

<math>S(t + \delta t) = S(t) + \dot{S}(t)\,\delta t = \begin{bmatrix} x(t) \\ y(t) \\ \theta(t) \end{bmatrix} + \begin{bmatrix} \frac{v_L(t) + v_R(t)}{2} \cos\theta(t) \\ \frac{v_L(t) + v_R(t)}{2} \sin\theta(t) \\ \frac{v_R(t) - v_L(t)}{d} \end{bmatrix} \delta t</math>
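A minimal sketch of this first-order odometric update, with a small simulation loop as a usage example (again in Python purely for illustration; the real system performs odometry within QUARC):

<syntaxhighlight lang="python">
import math

D_WHEELBASE = 0.235  # d: wheel separation (m), from Table 3.1

def odometry_step(state, v_l, v_r, dt, d=D_WHEELBASE):
    """One first-order (Euler) update of the pose S = (x, y, theta)
    from the wheel speeds over a small time step dt."""
    x, y, theta = state
    v_c = (v_l + v_r) / 2.0          # chassis speed
    w_c = (v_r - v_l) / d            # angular rate
    return (x + v_c * math.cos(theta) * dt,
            y + v_c * math.sin(theta) * dt,
            theta + w_c * dt)

# Example: drive a gentle left turn for 2 s at 100 Hz and track the pose.
state = (0.0, 0.0, 0.0)
for _ in range(200):
    state = odometry_step(state, v_l=0.30, v_r=0.35, dt=0.01)
print(state)  # final (x, y, theta) estimate
</syntaxhighlight>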