Projects:2017s1-170 Formation Control and Obstacle Avoidance for Heterogeneous Multi-Agent Systems (Unmanned Aerial Vehicles and Robots)

'''Description'''
 
  
The objective of this project is to build an autonomous heterogeneous multi-vehicle system. The system consists of UAVs and ground robots and performs formation maintenance and manipulation with obstacle avoidance. The platform is built from vision-based positioning, a cooperation protocol and an obstacle avoidance algorithm. Potential applications include ground-to-air multi-task coordination such as terrain exploration, disaster rescue and traffic control. The project involves 1-2 quadrotor UAVs, 3-4 ground robots and multiple cameras to locate the agents and obstacles. Each robot is modelled as a two-wheeled disc, while the quadrotor is described by aerodynamic analysis. The communication channels between the vehicles (robots and UAVs) and the computer are established through WiFi. Based on the positions and attitudes of the vehicles, a coordination algorithm is designed so that the UAVs and robots operate in user-defined formations. Meanwhile, obstacle avoidance and inter-robot collision avoidance also need to be considered.

== Introduction ==
This project builds an autonomous heterogeneous multi-vehicle system in which several unmanned ground vehicles (UGVs) and an unmanned aerial vehicle (UAV) achieve obstacle avoidance and formation control. The UAV moves to any destination by controlling its four rotors, while two UGVs, each driven by two wheels, follow the UAV as followers. The UGVs follow the UAV in a prescribed formation and bypass all obstacles on the ground while moving. When a UGV encounters an obstacle, it temporarily leaves the formation and gives priority to bypassing the obstacle. If there are several obstacles on the ground, the UGVs can still find a path that follows the UAV as the leader.

== Motivation ==
................
== System Structure ==
The whole system consists of the following major parts: a UAV, UGVs, an OptiTrack camera system, and a host computer connected to these systems through WiFi, with a wireless modem providing the WiFi network as the communication channel. All equipment is shown in Figure 1.1.

[[File:system_170.jpg]]
Figure 1.1 Hardware of this system
The working mechanism of this system is as follows. First, the OptiTrack camera system observes the surrounding environment to identify and locate the UAV and the UGVs operating in it. Second, the locations of these vehicles are transmitted from the camera system to the controller built in Simulink on a host computer, which converts these location data into rotational speeds for the four rotors of the UAV and wheel velocities for each UGV. Finally, after receiving these velocity commands, the UAV can move to any desired destination while the UGVs follow the UAV in a prescribed formation, bypassing any obstacles along the way. An illustration of how the system works is shown in Figure 1.2. The following sections focus on path planning and obstacle avoidance and explore the method used to address this problem.

== Kinematics of UGV  ==
As described in the introduction, this project involves unmanned ground vehicles performing formation control and path planning with obstacle avoidance, which is my task to complete. It is therefore necessary to explain the kinematics of the unmanned ground vehicles and how to control them so that they reach a desired position and run on an expected trajectory.

=== Introduction of the unmanned ground robot ===
In this project, a mature product called the QBot 2 is used, which is an innovative autonomous ground robot designed by Quanser. The QBot 2 consists of a Yujin Robot Kobuki platform, a Microsoft Kinect RGB camera and depth sensor, and a Quanser DAQ with a wireless embedded computer [2]. The appearance of the QBot 2 is shown in Figure 3.1.

Figure 3.1 The appearance of the QBot 2

A computer system is embedded in the vehicle: a Gumstix DuoVero computer runs QUARC, Quanser's real-time control software, and interfaces with the QBot 2 data acquisition card (DAQ) [2]. The interface to the target computer is MATLAB Simulink with QUARC. The QBot 2 relies on several modules to operate: the Quanser Hardware-In-the-Loop module to receive signals from sensors and to output signals, the Quanser Stream API module to set up wireless communications, the Quanser Multimedia module to read RGB and depth image data from the Kinect sensor, and the MathWorks Computer Vision System Toolbox to perform image processing. A diagram of this configuration is shown in Figure 3.2.

Figure 3.2 The configuration of this system
The QBot 2 uses a Kobuki mobile robot platform that comprises many signal ports and sensors to perform multiple functions. The functions of the QBot 2 are shown in Figure 3.3.

Figure 3.3 The functions of QBot 2
Because in this project the ground robot is mainly used to perform formation control and dynamic path planning, with vision-based positions provided by a separate camera system designed by Xiaoyuan Wang, many functions of the QBot 2 will not be introduced in detail. Importantly, the QBot 2 is driven by two differential-drive wheels with built-in encoders, which means that we can only change the velocities of the two wheels to make the QBot 2 move as expected. The structure of the QBot 2 with its two wheels is shown in Figure 3.4.

Figure 3.4 The structure of QBot 2 with two wheels
Lastly, the specifications of the QBot 2 should be introduced, since these parameters are closely related to the following kinematics section. For example, the maximum speed determines how fast the QBot 2 can reach a position, and the diameter of the QBot 2, especially the distance between the two wheels, determines the trajectory of the QBot 2 for a given pair of wheel velocities. The specifications of the QBot 2 are shown in Table 3.1.

Table 3.1 QBot 2 specifications
=== Kinematics of the unmanned ground robot ===
As discussed before, the Quanser QBot 2 mobile platform has two central drive wheels mounted on a common axis that bisects the robot. This drive configuration is called differential drive, and it defines the mathematical relationship between the independent motion of the two wheels and the overall movement of the robot chassis [1]. The motion of each wheel is measured by encoders, and the robot's orientation angle is estimated using the inertial gyro, or it can be calculated as described in the following part. To understand the relationship between the independent motion of the two wheels and the motion of the overall robot, the motion of the robot about a common point is modelled, as shown in Figure 3.5.

Figure 3.5 QBot 2 Mobile Platform Reference Frame Definitions
Let the radius of the wheels be denoted by r, and the wheel rotational speeds be denoted by ω_L and ω_R for the left and right wheel respectively. The velocity of each wheel in the two-dimensional plane is then described below.
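In the standard differential-drive form, the wheel linear speeds follow directly from the wheel radius and rotational speeds:

<math>v_L = r\,\omega_L, \qquad v_R = r\,\omega_R</math>
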
If the wheels do not slip, the QBot 2 can move along the horizontal plane in a straight or curved trajectory, or even spin on the spot, which means the robot always rotates about some point. Even when the velocities of the two wheels are the same and the robot moves in a straight line, this rotation point still exists, although it lies at infinity [1]. This centre of rotation is called the Instantaneous Center of Curvature (ICC). Let r_ICC be the distance between the ICC and the centre of the robot chassis. If d is the distance between the left and right wheels, θ is the heading angle of the robot, and v_C is the (forward/backward) speed of the robot chassis centre, the motion of the QBot 2 chassis can be described by the following equations.
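Expressed in terms of the rotation about the ICC, with ω_C denoting the rotational rate of the chassis (the rate of change of the heading θ), the wheel speeds are:

<math>v_L = \omega_C \left( r_{ICC} - \frac{d}{2} \right), \qquad v_R = \omega_C \left( r_{ICC} + \frac{d}{2} \right)</math>
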
v_C, v_L and v_R are all defined along the same axis, lying in the forward/backward direction of the chassis. From the wheel speeds v_L and v_R, the robot speed v_C, the angular rate ω_C and the distance to the ICC, r_ICC, we can obtain the relationship between the motion of the wheels and the motion of the robot.
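Solving the relations above for the chassis variables gives:

<math>v_C = \frac{v_R + v_L}{2}, \qquad \omega_C = \frac{v_R - v_L}{d}, \qquad r_{ICC} = \frac{d}{2}\,\frac{v_R + v_L}{v_R - v_L}</math>
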
Conversely, the inverse kinematics model calculates the wheel speeds required to obtain a desired robot chassis speed v_C and angular rate ω_C. Based on these two variables, the wheel speeds v_L and v_R can be computed directly.
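Rearranging the same relations gives:

<math>v_L = v_C - \frac{d}{2}\,\omega_C, \qquad v_R = v_C + \frac{d}{2}\,\omega_C</math>
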
The above kinematics model is derived in a local frame of reference. In other words, the chassis speed v_C is defined with respect to the robot itself and not in the global frame that applies to the whole environment. Because the orientation of the robot chassis changes whenever the angular rate ω_C is non-zero, the kinematics model and all motion variables need to be transformed from the local reference frame to the global reference frame in order to derive the robot motion in the global frame [1]. If the robot's orientation angle is θ, the transformation can be described by the following rotation matrix.
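For a planar robot with orientation θ, the standard rotation from the local frame to the global frame is:

<math>R(\theta) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}</math>
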
A state vector S is defined, containing the position, x and y, and the orientation angle θ. It can be written as a vector.
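Collecting the pose variables defined above:

<math>S = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix}</math>
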
The x and y axes lie in the two-dimensional plane where the robot moves. The heading θ is defined with respect to the x axis, which means the heading is zero (θ = 0) when the forward direction of the robot chassis aligns with the x axis [1]. The differential of the states can be expressed using the robot chassis speed v_C and angular rate ω_C.
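Combining the local kinematics with the rotation above gives:

<math>\dot{S} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{bmatrix} = \begin{bmatrix} v_C \cos\theta \\ v_C \sin\theta \\ \omega_C \end{bmatrix}</math>
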
Likewise, the position of the Instantaneous Centre of Curvature (ICC) in space, (x_ICC, y_ICC), can be expressed in the global reference frame.
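Since the ICC lies on the axis perpendicular to the heading, at distance r_ICC from the chassis centre, its global position is:

<math>x_{ICC} = x - r_{ICC}\sin\theta, \qquad y_{ICC} = y + r_{ICC}\cos\theta</math>
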
Given the robot chassis state vector S(t) and its rate of change <math>\dot{S}(t)</math>, expressed in the global reference frame, the robot pose at time t can be computed as follows.
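Integrating the state derivative from the initial pose:

<math>S(t) = S(0) + \int_0^t \dot{S}(\tau)\, d\tau</math>
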
For the QBot 2, given the wheel separation d, the heading θ, and the wheel speeds v_L and v_R, the equations of motion for odometric localization are expressed as follows.
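Substituting the differential-drive relations into the state derivative and integrating gives:

<math>\begin{align} x(t) &= \frac{1}{2}\int_0^t \left(v_R + v_L\right)\cos\theta \, d\tau + x(0) \\ y(t) &= \frac{1}{2}\int_0^t \left(v_R + v_L\right)\sin\theta \, d\tau + y(0) \\ \theta(t) &= \frac{1}{d}\int_0^t \left(v_R - v_L\right) d\tau + \theta(0) \end{align}</math>
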
To evaluate the integrals in practice, a small time step δt is introduced, so the integration is approximated by a first-order Taylor series expansion.
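With a small time step δt, the pose update becomes:

<math>\begin{align} x(t+\delta t) &\approx x(t) + \frac{v_R(t) + v_L(t)}{2}\cos\theta(t)\,\delta t \\ y(t+\delta t) &\approx y(t) + \frac{v_R(t) + v_L(t)}{2}\sin\theta(t)\,\delta t \\ \theta(t+\delta t) &\approx \theta(t) + \frac{v_R(t) - v_L(t)}{d}\,\delta t \end{align}</math>
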
== Dynamic Path Planning ==
== The theory ==
As discussed in the previous section, the artificial potential field is the method chosen for path planning because of its elegant mathematical analysis and simplicity.

The Artificial Potential Field (APF) method was first put forward by Khatib in 1986 [Khatib]. It generates a scalar field consisting of artificial ‘hills’ representing obstacles and ‘valleys’ representing attractors [9]. The negative gradient of the potential field creates repulsive and attractive forces that enable the robot to reach the destination while avoiding obstacles [9]. Specifically, there are two types of potential sources: attractive poles and repulsive poles. Areas the system must not enter, such as obstacles, are the repulsive poles, while the target is the attractive pole [4]. For the attractive pole, the virtual force generated by the target is proportional to the relative distance to the target; for a repulsive pole, equipotential lines and a repulsive field similar in shape to the obstacle are established. The potential of the system at a point is the sum of the potentials generated by all poles at that point; that is, the repulsive potentials of the obstacles and the attractive potential of the target are superimposed, and the negative gradient direction gives the direction of the virtual force imposed on the robot [6]. It is this virtual force that leads the system to bypass the obstacles and move towards the target.

In brief, the robot and the target are treated as particles, and each obstacle or threat area is considered as a circle [4]. The motion direction of the robot at any position in the space is determined by the resultant force field, composed of the attractive field generated by the target and the repulsive fields caused by the obstacles, as shown in Figure 4.1.

Figure 4.1 The force imposed on the UGV in the artificial potential field model

The attractive potential field function can be described as [10]:

<math>U_{att}(X) = \frac{1}{2}\,\alpha\,\rho^{2}(X, X_g)</math>

where X = (x, y) is the position vector of the ground robot, α is the gain coefficient of the attractive force, X and X_g are the locations of the robot and the target respectively, and ρ(X, X_g) is the relative distance between the robot and the target.

The attractive force acting on the ground robot is the negative gradient of the attractive potential, directed from the robot towards the target:

<math>F_{att}(X) = -\nabla U_{att}(X) = \alpha\,\rho(X, X_g)</math>

Similarly, the repulsive potential field function can be written in a corresponding form.
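In the standard form used with this method, the repulsive potential is zero beyond a threshold distance and grows rapidly as the robot approaches the obstacle:

<math>U_{rep}(X) = \begin{cases} \dfrac{1}{2}\,\beta \left( \dfrac{1}{\rho(X, X_o)} - \dfrac{1}{\rho_0} \right)^{2}, & \rho(X, X_o) \le \rho_0 \\ 0, & \rho(X, X_o) > \rho_0 \end{cases}</math>
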
where X = (x, y) is the position vector of the ground robot, β is the gain coefficient of the repulsive force, X and X_o are the locations of the robot and the obstacle respectively, ρ(X, X_o) is the relative distance between the robot and the obstacle, and ρ_0 is the distance threshold that decides whether the repulsive force exists or not.
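The corresponding repulsive force is the negative gradient of this potential; in the standard form it is:

<math>F_{rep}(X) = -\nabla U_{rep}(X) = \begin{cases} \beta \left( \dfrac{1}{\rho(X, X_o)} - \dfrac{1}{\rho_0} \right) \dfrac{1}{\rho^{2}(X, X_o)}\,\nabla\rho(X, X_o), & \rho(X, X_o) \le \rho_0 \\ 0, & \rho(X, X_o) > \rho_0 \end{cases}</math>

where ∇ρ(X, X_o) is the unit vector pointing from the obstacle towards the robot.
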
Thus, the resultant force acting on the ground robot is [5]:
<math>F_{tot} = F_{att} + F_{rep}</math>

Under the combined effect of the virtual attractive field of the target point and the virtual repulsive fields of the obstacles, the vehicle can successfully reach the destination, bypassing all obstacles along the way [13].

== Improved theory ==
However, the equation for the repulsive force in the basic artificial potential field has serious problems. First, when the robot reaches the destination the attractive force becomes zero while a repulsive force may still exist, which can push the vehicle away from the destination. In addition, in the extreme situation where an obstacle is very close to the destination, the repulsive force generated by this obstacle can be much larger than the attractive force generated by the destination, preventing the vehicle from approaching the destination. To avoid these two situations, the repulsive force is adjusted by introducing the relative distance between the current location and the destination, i.e. multiplying the previous repulsive potential by a factor <math>\rho^{n}(X, X_g)</math>, which makes the repulsive force zero at the destination [18].

The adjusted repulsive potential field [15] takes the following form.
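Assuming the correction factor is the robot-to-goal distance raised to the power n, consistent with the description above, the adjusted potential is:

<math>U_{rep}(X) = \begin{cases} \dfrac{1}{2}\,\beta \left( \dfrac{1}{\rho(X, X_o)} - \dfrac{1}{\rho_0} \right)^{2} \rho^{n}(X, X_g), & \rho(X, X_o) \le \rho_0 \\ 0, & \rho(X, X_o) > \rho_0 \end{cases}</math>
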
where <math>n \ge 0</math>.

The corresponding repulsive force is obtained as the negative gradient of this adjusted potential.
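One common decomposition of this gradient, written with the symbols above (an assumed form consistent with [15], not necessarily the exact expression used there), is:

<math>F_{rep}(X) = \begin{cases} F_{rep1} + F_{rep2}, & \rho(X, X_o) \le \rho_0 \\ 0, & \rho(X, X_o) > \rho_0 \end{cases}</math>

<math>F_{rep1} = \beta \left( \dfrac{1}{\rho(X, X_o)} - \dfrac{1}{\rho_0} \right) \dfrac{\rho^{n}(X, X_g)}{\rho^{2}(X, X_o)}, \qquad F_{rep2} = \dfrac{n}{2}\,\beta \left( \dfrac{1}{\rho(X, X_o)} - \dfrac{1}{\rho_0} \right)^{2} \rho^{\,n-1}(X, X_g)</math>
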
where F_rep1 is directed from the obstacle towards the robot, F_rep2 is directed from the robot towards the destination, and both terms vanish when ρ(X, X_o) > ρ_0.

== Illustration of system blocks ==
Even though the simulation results have verified that the artificial potential field can achieve path planning and formation control in theory, real-field testing should also be done to check whether this method works well in practice. To check whether the UGVs can move with formation control and obstacle avoidance in a real environment, Simulink blocks to control the real robots need to be built. This section explains the concept behind each block and how each block works.

All the Simulink controller blocks for this project are shown in Figure 5.1, consisting of blocks for the core controller, position measurement, formation control, transmitting commands and saving data. The following parts describe each block in turn.

Figure 5.1 Simulink blocks of controllers
== Core controller ==
The Simulink block of the core controller is shown in Figure 5.2. The block is code-based, and its function is to output the heading velocity and angular velocity of the leader and the followers by processing the positions and heading angles of the leader and the followers, the positions of the obstacles, and the detection signal that triggers formation control.

Figure 5.2 The Simulink block of the core controller
In detail, using the artificial potential field, the controller calculates the resultant force, consisting of the attractive force and several repulsive forces, for the leader and the two followers by processing the input variables, i.e. the leader's position and current heading angle and the two followers' positions and current heading angles. The attractive force F_att is easy to compute, and the equation for F_att is given in Section 4, while there may be several types of repulsive force to consider. The first type of repulsive force is generated by a nearby obstacle. As shown in the repulsive force equation in Section 4, if the distance between the UGV and an obstacle is lower than the threshold distance P_0, this repulsive force appears. An illustration of this category of repulsive force is shown in Figure 5.3.

Figure 5.3 Illustration of repulsive force caused by obstacles nearby
The second type of repulsive force is caused by surrounding UGVs. Each UGV treats the other UGVs as obstacles and avoids colliding with them. Similarly, only when the distance between one UGV and another is lower than a threshold distance P_1 does this repulsive force appear. An illustration of this category of repulsive force is shown in Figure 5.4.

Figure 5.4 Illustration of repulsive force caused by another UGV nearby
Based on the calculated resultant force, which is the sum of the attractive force and all repulsive forces, the velocities of the two wheels of the leader and the two followers can then be obtained. Specifically, for the leader, the total force F_tot is obtained by summing the attractive force F_att created by the destination and the repulsive forces F_rep created by the obstacles. The equation for F_tot is:
<math>F_{tot} = F_{att} + F_{rep}</math>

It is assumed that the heading velocity v_C of the vehicle is proportional to the resultant force F_tot, which can be written as follows.
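A natural reading of this assumption, using the magnitude of the resultant force, is:

<math>v_C = \alpha \,\lVert F_{tot} \rVert</math>
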
where α is the proportional coefficient.

For the angular velocity ω of the UGV, PD control is applied. The equation for ω is as follows:

<math>\omega = K_p\left(\theta_{tot} - \varphi\right) + K_d\,\frac{d\left(\theta_{tot} - \varphi\right)}{dt} + K_p\left(\varphi_{leader} - \varphi_{follower}\right)</math>

where θ_tot is the angle of the resultant force with respect to the y axis, φ is the current heading angle of the vehicle with respect to the y axis, K_p is the proportional coefficient of the PD control and K_d is the differential coefficient of the PD control [7].

The input of this PD controller is the angle of the resultant force, while the controlled output is the current orientation angle of the vehicle. By using the PD controller, the orientation angle of the vehicle is driven to align with the angle of the resultant force [8]. The PD controller is shown in Figure 5.5.

Figure 5.5 PD controller
The reason for choosing PD control is that it can be used to adjust the angular velocity of the vehicle so that θ_tot equals φ, which means the heading direction of the vehicle aligns with the direction of the resultant force. In addition, the differential term of the PD control helps reduce the error earlier and damps the response. The variables used to calculate v_C and ω are illustrated in Figure 5.6.

Figure 5.6 Demonstration of variables to calculate v_C and ω
From the kinematic equations

<math>v_L = v_C - \omega R, \qquad v_R = v_C + \omega R,</math>

where R is half of the wheel separation d, the velocities of the two wheels of the leader can be computed.
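The following MATLAB sketch puts these pieces together for the leader. It is a minimal illustration of the computation described above, not the project's actual Simulink block code: the function name, the argument layout, the numeric value of R and the discrete-time derivative in the PD term are assumptions, the inter-UGV repulsion (gain m1, threshold P1) is handled the same way as the obstacle repulsion and is omitted for brevity, and the follower coupling term K_p(φ_leader − φ_follower) is left out because it only applies to the followers.

<syntaxhighlight lang="matlab">
function [vL, vR] = leader_apf_step(p, phi, goal, obstacles, dt)
% Sketch of the leader's APF controller (assumed form).
% p, goal   : 1x2 positions [x y]
% phi       : heading angle measured from the y axis
% obstacles : Nx2 obstacle positions
% dt        : controller sample time

% Parameters from Table 5.1
k     = 5;      % attractive gain coefficient
m     = 7.5;    % repulsive gain coefficient (obstacles)
P0    = 0.6;    % repulsion threshold for obstacles [m]
alpha = 0.01;   % coefficient for the heading velocity v_C
Kp    = 1.2;    % proportional gain for the angular velocity
Kd    = 0.8;    % differential gain for the angular velocity
R     = 0.118;  % half of the wheel separation d [m] (assumed value)

% Attractive force towards the destination, proportional to the distance
F = k * (goal - p);

% Add the repulsive force of every obstacle closer than P0
for i = 1:size(obstacles, 1)
    diffv = p - obstacles(i, :);
    rho   = norm(diffv);
    if rho > 0 && rho < P0
        F = F + m * (1/rho - 1/P0) * (1/rho^2) * (diffv / rho);
    end
end

% Heading velocity proportional to the magnitude of the resultant force
vC = alpha * norm(F);

% PD control on the angle between the resultant force and the heading,
% with angles measured from the y axis as in the text
theta_tot = atan2(F(1), F(2));
err = atan2(sin(theta_tot - phi), cos(theta_tot - phi));  % wrapped error
persistent prev_err
if isempty(prev_err)
    prev_err = err;
end
w = Kp * err + Kd * (err - prev_err) / dt;
prev_err = err;

% Differential-drive wheel speeds
vL = vC - w * R;
vR = vC + w * R;
end
</syntaxhighlight>

In the real system this kind of logic would sit inside a code-based block such as the one in Figure 5.2 and be evaluated once per sample time of the Simulink model.
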
The only difference for the followers is that the final destination is not what generates their attractive force; instead, the attractive force is generated by the position of a virtual follower, which is determined by the current location of the leader and the formation shape. The equation for the location of the virtual followers [8] is given below.
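One common leader-referenced form, assuming the heading angles are measured from the y axis as above and the virtual follower sits at distance l and relative bearing ψ behind the leader, is:

<math>x_v = x_l - l\,\sin\!\left(\varphi_l + \psi\right), \qquad y_v = y_l - l\,\cos\!\left(\varphi_l + \psi\right), \qquad \varphi_v = \varphi_l</math>
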
where (x_l, y_l) is the coordinate of the leader, (x_v, y_v) is the coordinate of one virtual follower, l is the distance between the leader and the virtual follower, φ_l is the heading angle of the leader, φ_v is the heading angle of the virtual follower, and ψ is the bearing angle of the virtual follower relative to the leader. An illustration of this equation for finding the location of the virtual follower is shown in Figure 5.7.

Figure 5.7 Illustration of how to calculate virtual destination
All parameters used in the controller are shown in Table 5.1.

Table 5.1 Parameters used in the controller

{| class="wikitable"
! Symbol !! Description !! Value
|-
| k || Attractive gain coefficient || 5
|-
| m || Repulsive gain coefficient for the repulsive force caused by obstacles || 7.5
|-
| m1 || Repulsive gain coefficient for the repulsive force caused by other UGVs || 15
|-
| P0 || Threshold to determine whether there is a repulsive force due to obstacles || 0.6 m
|-
| P1 || Threshold to determine whether there is a repulsive force due to UGVs || 0.3 m
|-
| α || Coefficient to compute the heading velocity v_C || 0.01
|-
| K_p || Proportional coefficient to compute the angular velocity ω || 1.2
|-
| K_d || Differential coefficient to compute the angular velocity ω || 0.8
|-
| l || Distance between the leader and one follower || 0.6 m
|-
| ψ || Following angle between the leader and one follower || π/3
|}

 
== Detection module for formation control ==
The Simulink block that triggers a formation change is shown in Figure 5.8. This block works as follows: first, the OptiTrack camera system detects whether a certain marker, denoting the signal to change formation, appears, and sets the signal array_change to 1 if this marker is detected. Then, the array_change signal and a copy delayed by one sample are used as inputs to a logic chart that detects transitions in the appearance of the marker. The output toggles between 0 and 1 on each such transition, as shown in Figure 5.9.
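A minimal MATLAB sketch of this toggle logic (an assumed form, not the actual chart in Figure 5.9): it takes the detection signal and its one-sample delay and flips the formation flag on each rising edge.

<syntaxhighlight lang="matlab">
function formation = formation_toggle(array_change, array_change_prev)
% Sketch of the formation-change trigger (assumed form): toggle the
% formation flag every time the detection signal rises from 0 to 1.
persistent state
if isempty(state)
    state = 0;
end
if array_change == 1 && array_change_prev == 0   % rising edge detected
    state = 1 - state;                           % switch between 0 and 1
end
formation = state;
end
</syntaxhighlight>
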
Figure 5.8 The Simulink block to trigger formation change
Figure 5.9 The internal working mechanism to trigger formation control
== OptiTrack Measurement ==
The block for the OptiTrack measurement is shown in Figure 5.10. The OptiTrack system detects the locations and heading angles of all robots and sends these data to the controller via this block. These data are then used as inputs to the core controller for further processing. Since this block is provided by Quanser, the details of how it works are not presented.

Figure 5.10 The block for OptiTrack Measurement
== Saving data ==
The block for saving data is shown in Figure 5.11. Using this block, all data, including the real-time locations of the UGVs, the real-time yaw angle of the UAV and the UGV velocity data, can be saved for further analysis or for finding the potential cause of a failed experiment.

Figure 5.11 The block for saving data
== Kinematic model of UGV ==
The block that sends the velocity values of the two wheels to each UGV is shown in Figure 5.12. This block works together with the kinematic blocks that enable the leader and the followers to move after receiving the wheel velocities, shown in Figure 5.13 and Figure 5.14 for the leader and one follower respectively. After receiving the velocity values of the two wheels, the kinematic block sends these data to the UGV, enabling it to move along the desired trajectory.

Figure 5.12 The block for sending data to robots
Figure 5.13 The block enabling the leader to move
Figure 5.14 The block enabling one follower to move
The logic of this method is as follows. For the leader, the algorithm first judges whether there is any obstacle around the leader by comparing the distance between the leader and each obstacle with the threshold distance; if there is no obstacle nearby, the robot keeps heading towards the destination. Otherwise, the algorithm increases the velocity of the left wheel if the obstacle is to the left of the leader, and vice versa, so as to bypass the obstacle. After bypassing the obstacle, the algorithm resumes checking whether there is any obstacle around. After many iterations, the leader reaches the destination, having bypassed all obstacles. The flow chart of this mechanism [12] for the leader is shown in Figure 5.15.

Figure 5.15 The flow chart of the working mechanism of the leader
What is different for a follower is that its destination is the location of a virtual follower defined with respect to the leader according to the formation shape. Thus, for the followers, the algorithm needs to judge after each iteration whether the virtual destination has been reached, and it bypasses obstacles in the same way as the leader. The flow chart of this mechanism for the followers is shown in Figure 5.16.

Figure 5.16 The flow chart of the working mechanism of the follower
== Results and Analysis ==
Photographs of the real test with three UGVs performing path planning and obstacle avoidance are shown in Appendix B, where the two followers first follow the leader in a triangle formation and then leave the formation to bypass obstacles on the path. To analyse the experimental result, the plot recording the real-time location of each UGV, corresponding to this test, is shown in Figure 6.1. As shown in Figure 6.1, the leader moves towards the destination indicated by a green dot at (0.2, 1.5), and the trajectory of the leader is drawn as a red line pointing towards the destination along the Y direction. The left follower, denoted by a blue line, always follows the leader in the prescribed formation, while the right follower also follows the leader in formation but leaves the formation to bypass an obstacle indicated by a red dot.

Figure 6.1 The plot recording real time location of UGVs
To further analyse the motion of the leader and the followers, the trajectories in the Y coordinate and the X coordinate over time are shown in Figure 6.2 and Figure 6.3 respectively.

Figure 6.2 The trajectory in Y coordinate
Figure 6.3 The trajectory in X coordinate
In Figure 6.2, the Y coordinate of the leader is always larger than the Y coordinates of the two followers, which corresponds to Figure 6.1 and the photographs of the real test in Appendix B, where the leader is in front of the two followers, forming an upward-pointing triangle. The formation shape, namely this triangle, is also supported by Figure 6.2, in which the Y coordinates of follower1 and follower2 are almost the same most of the time. In Figure 6.3, the X coordinates of the leader and follower1 remain roughly constant, while the X coordinate of follower2 first drops significantly before rising to a new value. The reason for this is that there is an obstacle in front of follower2, so follower2 has to turn sharply to bypass it, resulting in a change in its X coordinate. This corresponds to Figure 6.1 and the photographs in Appendix B. Before t reaches 20 s, follower2 moves from its initial location to the location of its corresponding virtual follower to form the formation shape. After t reaches 40 s, an obstacle is detected in front of follower2, so follower2 has to turn sharply towards the negative X axis to bypass it, which is why its X coordinate decreases significantly. After bypassing the obstacle successfully at around t = 70 s, follower2 heads back to the location of its corresponding virtual follower to move in formation with the leader and follower1. Several samples of the plot recording the real-time locations of the UGVs are given in Appendix C.
