Projects:2019s2-23301 Robust Formation Control for Multi-Vehicle Systems
Formation control has been widely used in the control of robots, since it can improve the overall efficiency of the system. In this project, we aim to design a robust formation control strategy for a multi-vehicle system, in which the system can tolerate at least one network problem or physical failure.
Introduction
Formation control of multi-agent systems (MASs) has been widely used for cooperative tasks in applications such as terrain exploration, mobile networks and traffic control. However, communication-induced problems and the high failure risk of the growing amount of on-board equipment have created a number of challenges for the security of MASs. The objective of this project is to design a robust formation control strategy for a multi-vehicle system against communication and physical failures (e.g., network attacks, link failures, packet dropouts, sensor/actuator faults).
The vehicles are designed to sense their local environments by visual navigation and achieve a self-organising formation. A robust fault-tolerant control strategy is investigated to deal with at least one network problem or physical failure. The effectiveness of the formation control strategy and its robustness should be verified by both simulations and experiments. Potential applications include highly flexible MASs and high-security cyber-physical systems. Currently, our lab is equipped with a multi-vehicle platform consisting of quadrotors, ground robots and camera location systems. Algorithms are developed in either MATLAB or C. MATLAB, Simulink, OpenGL, Motive and Visual Studio are possible software choices for this project.
Project team
Student members
- Abdul Rahim Mohammad
- Jie Yang
- Kamalpreet Singh
- Zirui Xie
Supervisors
- Prof. Peng Shi
- Prof. Cheng-Chew Lim
Advisors
- Xin Yuan
- Yuan Sun
- Yang Fei
- Zhi Lian
Objectives
Design a robust formation control approach for a multi-vehicle system to achieve:
- Self-decision making
- Environment detection
- Communication
- Obstacle avoidance
- Tolerance to physical or network problems
Background
Autonomous Control System
An autonomous control system has the power and ability to govern itself in the performance of control functions. Such systems are composed of a collection of hardware and software that can perform the necessary control functions without intervention, or over extended time periods. There are several degrees of autonomy. Conventional fixed controllers can be considered to handle only a restricted class of plant parameter variations and disturbances, while at a high degree of autonomy the controller must be able to perform a number of functions beyond conventional ones such as regulation or tracking.
Agent
For the most part, we are happy to accept computers as obedient, literal, unimaginative servants. For many applications, it is entirely acceptable. However, for an increasingly large number of applications, we require systems that can decide for themselves what they need to do in order to achieve the objectives that we delegate to them. Such computer systems are known as agents. Agents that must operate robustly in rapidly changing, unpredictable, or open environments, where there is a significant possibility that actions can fail, are known as intelligent agents, or sometimes autonomous agents.
Multi Agent System
A multi-agent system is a group of loosely connected autonomous agents acting in an environment to achieve a common goal. This is done by cooperating and sharing knowledge with each other. Multi-agent systems have been widely adopted in many application domains because of the advantages they offer. Some of the benefits of using MAS technology in large systems are:
- An increase in the speed and efficiency of operation, due to parallel computation and asynchronous operation
- Graceful degradation of the system when one or more agents fail, which increases the reliability and robustness of the system
- Scalability and flexibility: agents can be added as and when necessary
- Reduced cost: individual agents cost much less than a centralized architecture
- Reusability: agents have a modular structure, so they can be replaced in other systems or upgraded more easily than a monolithic system
Formation Control
A formation of a multi-agent system is composed of a group of specific agents, communication among the agents, and the agents' geometrical information. This project focuses on the formation control of a multi-vehicle system. The aim of formation control is to design a controller that brings the agents to a desired geometric shape by assigning local control laws to the individual agents.
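One standard way to build such local control laws (used here purely as an illustration; it is not necessarily the controller developed in this project) is a displacement-based consensus rule, in which each agent steers its position relative to its neighbours towards a desired offset. The sketch below uses hypothetical gains and a three-agent triangle formation:

```python
def formation_step(positions, offsets, neighbours, gain=0.5, dt=0.1):
    """One step of a displacement-based consensus formation law.

    Each agent i applies the local law
        u_i = -gain * sum over neighbours j of ((x_i - x_j) - (d_i - d_j))
    where d_i is agent i's desired offset within the formation.
    """
    new_positions = []
    for i, (xi, yi) in enumerate(positions):
        ux = uy = 0.0
        for j in neighbours[i]:
            xj, yj = positions[j]
            # Drive the relative position (x_i - x_j) towards the
            # desired relative displacement (d_i - d_j).
            ux -= gain * ((xi - xj) - (offsets[i][0] - offsets[j][0]))
            uy -= gain * ((yi - yj) - (offsets[i][1] - offsets[j][1]))
        new_positions.append((xi + dt * ux, yi + dt * uy))
    return new_positions

# Three agents on a complete communication graph converging to a triangle.
positions = [(0.0, 0.0), (2.0, 0.5), (0.5, 2.0)]
offsets = [(0.0, 1.0), (-1.0, -1.0), (1.0, -1.0)]
neighbours = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

for _ in range(200):
    positions = formation_step(positions, offsets, neighbours)
# The relative positions now match the desired offsets (a triangle).
```

Because the law only uses relative positions of neighbours, each agent can compute it from locally sensed or communicated information, which is what makes it a local control law.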
Methodology
Detection
- Detect objects in immediate surroundings using LiDAR for distance measurement and monovision sensor for object classification
- Use angular rotation from IMUs and distance to object as polar coordinates for localisation
- Calculate global coordinate points of objects based on polar coordinates
- Pass object data to Agent unit for formation control
The detection module gathers localised environmental information using several sensors: a vision sensor, a SONAR ultrasonic sensor, a LiDAR ToF (Time-of-Flight) sensor and an IMU. The module achieves sensing through a heterogeneous acquisition mechanism, also referred to as multimodal sensing. Multimodal sensing is necessary because it captures environmental information more completely than single-sensor ranging. Data gathered from the various sensors is fused within the agent through a process called data fusion, which allows the multimodal data streams to be analysed jointly. The LiDAR and SONAR sensors provide ranging information for object detection during hypothesis generation, and the vision sensor provides the object's shape parameters for the hypothesis verification phase.
The detection module uses the ultrasonic sensor for its wide cone of detection and the LiDAR for its accuracy to sweep the immediate area, which allows a rover to mark any obstacles in its path. This is in contrast to using multiple sensors purely for ranging, which would increase computational needs and power consumption on the low-energy hardware on which the rovers are ultimately planned to be deployed.
Detection occurs in two phases. The first is the hypothesis generation phase, in which the ranging sensors detect an obstacle within their detection zone and form the hypothesis that the obstacle is a fellow agent. The rover then focuses on identifying the obstacle with the vision sensor. In the second phase, the hypothesis verification phase, the vision sensor extracts frames from the real-time feed and passes them to our YOLOv3 model, which has been pretrained with a classifier. The classifier was trained using 800 manually labelled images; the model was trained over 900,992 images with an average loss of 0.016559. To keep the system robust while balancing detection rate against computational needs, the vision sensor only captures a frame every 100 milliseconds and processes it through the object detection model.
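The two-phase logic above can be sketched as follows. The YOLOv3 inference is replaced by a stand-in `classifier` callable, and the 2 m detection-zone threshold is an assumed value, not a figure from the project:

```python
def hypothesis_pipeline(range_reading, frame, classifier, detection_range=2.0):
    """Two-phase detection: hypothesis generation from ranging data,
    then verification with the vision sensor.

    range_reading : distance (m) from the LiDAR/SONAR ranging sensors
    frame         : image frame from the vision sensor
    classifier    : stand-in for the trained YOLOv3 model; returns 1 if a
                    rover is detected in the frame, 0 otherwise
    """
    # Phase 1: hypothesis generation. An obstacle inside the detection
    # zone raises the hypothesis that it is a fellow agent.
    if range_reading > detection_range:
        return None  # nothing in the detection zone, no hypothesis formed
    # Phase 2: hypothesis verification via the vision sensor.
    state = classifier(frame)  # 0 = no rover, 1 = rover detected
    return {"distance": range_reading, "is_rover": bool(state)}
```

In the real system this function would be invoked on a frame only every 100 ms, so that vision inference does not dominate the rover's computational budget.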
The distance data is filtered through an alpha-trimmed mean filter and a Kalman filter before being passed from the main function to the support functions, indicated as "Filtered Distance Data" in Fig 2. Orientation data is filtered through a complementary filter and sent from the main function to the support functions for the hypothesis generation phase. The hypothesis is checked in the hypothesis verification phase, when the vision sensor returns a state value: "0" for no rover detected and "1" for a rover detected. The data is then fused and passed to the support function that calculates rectangular coordinates from the polar coordinates, using the filtered distance as the radius and the angle from the IMU sensor. The rover is now localized within its own frame, and this value is passed to the support functions, which calculate the global frame using the rover's coordinates as a reference point. Adding the local coordinates to this self-reference point locates the detected rovers in the global frame.
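The localisation step described above reduces to a polar-to-rectangular transformation followed by a frame shift. A minimal sketch, assuming the filtered distance and IMU angle are already available (the function name is illustrative, not one of the project's actual support functions):

```python
import math

def localise(distance, theta_deg, self_position):
    """Convert a filtered range/bearing pair into local rectangular
    coordinates, then shift into the global frame using the rover's own
    position as the self-reference point.

    distance      : filtered range to the detected object (m)
    theta_deg     : filtered IMU rotation angle (degrees)
    self_position : (x, y) of this rover in the global frame
    """
    theta = math.radians(theta_deg)
    # Local frame: rover centre at (0, 0), polar -> rectangular.
    local = (distance * math.cos(theta), distance * math.sin(theta))
    # Global frame: add the local offset to the rover's own coordinates.
    global_xy = (self_position[0] + local[0], self_position[1] + local[1])
    return local, global_xy

# An object 2 m away at a bearing of 90 degrees, seen by a rover at (1, 1).
local, global_xy = localise(2.0, 90.0, (1.0, 1.0))
# local is approximately (0.0, 2.0); global_xy approximately (1.0, 3.0)
```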
The following filters are necessary for robust performance of the detection module; they also provide a level of fault tolerance to the system.
Kalman Filter
The Kalman filter is a recursive algorithm that provides the optimal estimate of the system state, given the noise in the system. It forms the estimate of the unknown state from the previous estimate and new measurement data, updating at fixed time intervals specified in the model. The Kalman filter is used as a fault-tolerant mechanism that estimates the optimum distance to the object even if the LiDAR and SONAR ranging sensors provide faulty ranging data. The filter approaches the optimal estimate as the number of iterations increases.
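For a single, nearly static distance the filter collapses to a one-dimensional recursion. A minimal sketch, with assumed (not measured) noise variances `r` and `q`:

```python
def kalman_update(x_est, p_est, measurement, r=0.04, q=1e-4):
    """One iteration of a 1-D Kalman filter for a nearly static distance.

    x_est, p_est : previous state estimate and its variance
    measurement  : new range reading (m)
    r            : measurement noise variance (assumed, sensor dependent)
    q            : process noise variance (assumed)
    """
    # Predict: static model, so the state carries over; uncertainty grows.
    p_pred = p_est + q
    # Update: the Kalman gain weighs the prediction against the measurement.
    k = p_pred / (p_pred + r)
    x_new = x_est + k * (measurement - x_est)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Noisy LiDAR readings around a true distance of 1.65 m.
readings = [1.68, 1.63, 1.66, 1.64, 1.65, 1.66, 1.64]
x, p = readings[0], 1.0  # initialise from the first reading, high variance
for z in readings[1:]:
    x, p = kalman_update(x, p, z)
# x converges towards the true distance, and p shrinks, as iterations increase.
```

The recursion shows why a few iterations are needed before the estimate settles, matching the first-iteration versus third-iteration columns in the results table below.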
Complementary Filter
The complementary filter is a recursive algorithm that provides an estimate of the orientation data. It estimates the angular position and angular rate from the previous estimate and a combination of a low-pass and a high-pass filter. The complementary filter takes the slow-moving signals from the accelerometer and the fast-moving signals from the gyroscope and combines them through sensor fusion. Data from the accelerometer and gyroscope is sufficient to estimate angular position, so these are the only inputs to the filter. The gyroscope data is used because it is very precise in the short term and not susceptible to external forces; the accelerometer data is used because it does not drift, although it is noisier. The filter updates these measurements at fixed time intervals specified in the model, taking previous measurements into account while estimating new data.
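The fusion of the two signals is a single weighted update per time step. A sketch for one tilt axis, with an assumed weighting of 0.98 and a 10 ms update interval:

```python
def complementary_step(angle_prev, gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """One update of a complementary filter for a single tilt axis.

    gyro_rate   : angular rate from the gyroscope (deg/s): precise, but
                  integrating it alone accumulates drift
    accel_angle : tilt angle derived from the accelerometer (deg): noisy,
                  but drift-free
    alpha       : weighting; high-passes the gyro term and low-passes the
                  accelerometer term
    """
    return alpha * (angle_prev + gyro_rate * dt) + (1 - alpha) * accel_angle

# A stationary rover whose gyro reports a small constant bias of 0.5 deg/s,
# while the accelerometer correctly reads ~0 degrees of tilt.
angle = 0.0
for _ in range(1000):
    angle = complementary_step(angle, gyro_rate=0.5, accel_angle=0.0)
# Pure gyro integration over these 10 s would have drifted to 5 degrees;
# the accelerometer term keeps the estimate bounded near zero.
```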
Alpha Trimmed Mean Filter
Alpha-trimmed mean filters are a variation of adaptive mean filters that are especially effective against additive Gaussian noise. The filter is an adaptation from image processing, where it is used to remove impulse noise, as it is a good compromise between the moving mean filter and the median filter. It is used when the noise in the system deviates from Gaussian because of impulsive components. The filter is a moving-window filter that iterates over the continuous stream of data from the ranging sensors with a given window size w and trimming factor α. Adaptive alpha-trimmed filters allow the flexibility of choosing α based on the local noise statistics.
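A sliding-window sketch of the filter on a 1-D range stream, with illustrative values w = 5 and α = 1 (trim one sample from each end of the sorted window):

```python
def alpha_trimmed_mean(window, alpha):
    """Alpha-trimmed mean of one window of range samples.

    Sorts the window, discards the alpha lowest and alpha highest samples
    (the likely impulse-noise outliers), and averages the rest. With
    alpha = 0 this reduces to the moving mean; trimming all but the middle
    sample gives the median.
    """
    s = sorted(window)
    trimmed = s[alpha:len(s) - alpha]
    return sum(trimmed) / len(trimmed)

def filter_stream(samples, w=5, alpha=1):
    """Slide a window of size w over the sample stream."""
    return [alpha_trimmed_mean(samples[i:i + w], alpha)
            for i in range(len(samples) - w + 1)]

# An impulse (9.99 m) in otherwise steady ranging data is trimmed away,
# where a plain moving mean would be pulled far off.
readings = [1.65, 1.64, 9.99, 1.66, 1.65, 1.64, 1.65]
filtered = filter_stream(readings)
# Every filtered value stays close to the true 1.65 m distance.
```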
Formation Control
Results
Detection Results
Local and Global Coordinates
The system is capable of calculating local and global coordinates from the ranging data provided by the LiDAR sensor. Local coordinates are expressed in the local frame, whose origin is the center of the rover on which the sensors are mounted; this is referred to as the self-reference. The local frame takes the center of the coordinate plane (0,0) as the center of the rover and the distance of the detected object as input to the support functions. The support functions transform the polar coordinates, the LiDAR distance (r) and the IMU rotation (θ), into rectangular coordinates. Global coordinates are calculated using the position of the current rover in simulation space, accessed through CoppeliaSim's Remote API functions. The local frame is then added to the rover position in rectangular coordinates. Fig 1 shows the local and global coordinates displayed in the output of object detection.
Actual Distance (m) | LiDAR Distance (m) | Kalman Filtered, 1st iter (m) | Kalman Filtered, 3rd iter (m) |
---|---|---|---|
1.650 | 1.649 | 1.669 | 1.651 |
1.154 | 1.154 | 1.171 | 1.158 |
1.018 | 1.017 | 1.030 | 1.022 |
1.652 | 1.649 | 1.670 | 1.653 |
Although the LiDAR sensor is accurate in ranging operations, it is necessary to filter out noise introduced over time, either by the rover's own movement or by environmental conditions. The Kalman filter is used as a fault-tolerant mechanism that estimates the optimum distance to the object even if the LiDAR and SONAR ranging sensors fail or provide faulty readings. The filter works by computing the Kalman gain, which weighs the incoming sensor data, such as the LiDAR ranging data, against prediction-based estimates from a mathematical model of the system designed beforehand. Every iteration of the Kalman filter increases the accuracy of the estimates, giving robust ranging data even under unsuitable environmental conditions or with faulty sensor data. The Kalman filter, however, does not account for complete sensor failure, as it relies on incoming data to estimate the distance. Fault tolerance in that case is achieved by using the alternate ranging sensor, such as SONAR, or, when both sensors fail, by feeding active triangulation as an input to the filter. The Kalman estimates are consistently 3-10 mm higher than the actual distance; this is due to imperfect modelling of the system in the filter, and some calibration of the model is required before the filter can be used at maximum accuracy.
Actual X (m) | Actual Y (m) | Localised X (m) | Localised Y (m) |
---|---|---|---|
1.90 | 1.05 | 1.40 | 1.10 |
1.34 | 1.03 | 1.66 | 1.07 |
1.07 | 2.10 | 1.069 | 1.99 |
3.99 | 2.01 | 3.76 | 2.13 |
As can be seen from Fig 3 and the table, there is some discrepancy in the accuracy of the localized coordinate frames. This is due to the constant rotation of the rover, which changes the IMU's rotation data and in turn affects the polar-to-rectangular coordinate transformation. This error can be mitigated by using only the first measured distance (the first detection values) as the accurate measurement. The average distance error is 0.265 m. This error arises because the LiDAR detects the point of contact on the rover, whereas the actual coordinates denote the rover's center. It can be compensated by adding half the width of the rover as calibration data to the measured distance.
Conclusion