Projects:2019s2-23301 Robust Formation Control for Multi-Vehicle Systems


Formation control has been widely used in the control of robots, since it can improve the overall efficiency of the system. In this project, we aim to design a robust formation control strategy for a multi-vehicle system, in which the system can tolerate at least one network problem or physical failure.

Introduction

Formation control of multi-agent systems (MASs) has been widely used for cooperative tasks in applications such as terrain exploration, mobile networks and traffic control. However, communication-induced problems and the high failure risk of the growing amount of equipment have created a number of challenges for the security of MASs. The objective of this project is to design a robust formation control strategy for a multi-vehicle system against communication and physical failures (e.g., network attacks, link failures, packet dropouts, sensor/actuator faults).

The vehicles are designed to sense their local environment by visual navigation and achieve a self-organised formation. A robust fault-tolerant control strategy is investigated to deal with at least one network problem or physical failure. The effectiveness of the formation control strategy and its robustness are to be verified by both simulations and experiments. Potential applications are in highly flexible MASs and high-security cyber-physical systems. Currently, our lab is equipped with a multi-vehicle platform consisting of quadrotors, ground robots and camera-based localisation systems. Algorithms are developed in either MATLAB or C. MATLAB, Simulink, OpenGL, Motive and Visual Studio are candidate software tools for this project.

Project team

Student members

  • Abdul Rahim Mohammad
  • Jie Yang
  • Kamalpreet Singh
  • Zirui Xie

Supervisors

  • Prof. Peng Shi
  • Prof. Cheng-Chew Lim

Advisors

  • Xin Yuan
  • Yuan Sun
  • Yang Fei
  • Zhi Lian

Objectives

Design a robust formation control approach for a multi-vehicle system to achieve:

  • Self-decision making
  • Environment detection
  • Communication
  • Obstacle avoidance
  • Tolerance to physical or network failures

Background

Autonomous Control System

An autonomous control system has the power and ability for self-governance in the performance of its control functions. It is composed of a collection of hardware and software that can perform the necessary control functions without intervention, including over extended time periods. There are several degrees of autonomy: a conventional fixed controller can be considered to handle only a restricted class of plant parameter variations and disturbances, whereas at a high degree of autonomy the controller must be able to perform a number of functions beyond conventional regulation or tracking.

Agent

For the most part, we are happy to accept computers as obedient, literal, unimaginative servants, and for many applications this is entirely acceptable. However, for an increasingly large number of applications, we require systems that can decide for themselves what they need to do in order to achieve the objectives that we delegate to them. Such computer systems are known as agents. Agents that must operate robustly in rapidly changing, unpredictable, or open environments, where there is a significant possibility that actions can fail, are known as intelligent agents, or sometimes autonomous agents.

Multi Agent System

A multi-agent system is a group of loosely coupled autonomous agents that act in an environment to achieve a common goal by cooperating and sharing knowledge with one another. Multi-agent systems have been widely adopted in many application domains because of the advantages they offer. Some of the benefits of using MAS technology in large systems are:

  • An increase in the speed and efficiency of operation, due to parallel computation and asynchronous operation
  • Graceful degradation of the system when one or more agents fail, which increases the reliability and robustness of the system
  • Scalability and flexibility: agents can be added as and when necessary
  • Reduced cost: individual agents cost much less than a centralised architecture
  • Reusability: agents have a modular structure and can be replaced in other systems or upgraded more easily than a monolithic system

Formation Control

A formation of a multi-agent system is composed of a group of specific agents, communication among the agents, and the agents' geometric information. This project focuses on the formation control of the multi-vehicle system. The aim of formation control is to design a controller that brings the agents to a desired geometric shape by assigning local control laws to individual agents.
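
As a concrete illustration of such a local control law (a minimal sketch, not the controller developed in this project), a displacement-based formation rule can be written in MATLAB as follows; the function name, gain and variable layout are our own assumptions:

  % Minimal sketch of a displacement-based formation law (illustrative only).
  % p    : n-by-2 matrix of current agent positions
  % d    : n-by-2 matrix of desired agent positions within the formation shape
  % A    : n-by-n adjacency matrix of the communication graph
  % gain : positive control gain
  function u = formation_control(p, d, A, gain)
      n = size(p, 1);
      u = zeros(n, 2);
      for i = 1:n
          for j = find(A(i, :))   % neighbours of agent i
              % drive the relative position p_i - p_j towards the
              % desired relative position d_i - d_j
              u(i, :) = u(i, :) - gain * ((p(i, :) - p(j, :)) - (d(i, :) - d(j, :)));
          end
      end
  end

Under single-integrator dynamics, where each agent moves with velocity u(i, :), this law drives a connected team to the desired shape up to a common translation.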

Methodology

Detection

  1. Detect objects in the immediate surroundings, using LiDAR for distance measurement and a monocular vision sensor for object classification
  2. Use the angular rotation from the IMU and the distance to the object as polar coordinates for localisation
  3. Calculate the global coordinates of objects from these polar coordinates
  4. Pass the object data to the agent unit for formation control


The detection module gathers localised environmental information using several sensors: a vision sensor, a SONAR ultrasonic sensor, a LiDAR ToF (time-of-flight) sensor and an IMU. It senses through a heterogeneous acquisition mechanism, also referred to as multimodal sensing. Multimodal sensing is necessary because it captures environmental information more completely than single-sensor ranging. Data gathered from the various sensors is fused within the agent, a process called data fusion, which allows the multimodal data streams to be analysed jointly.

The LiDAR and SONAR sensors provide ranging information for object detection during hypothesis generation, and the vision sensor provides the object's shape parameters for the hypothesis verification phase. The detection module sweeps the immediate area using the ultrasonic sensor for its wide cone of detection and the LiDAR for its accuracy, which lets the rovers mark any obstacles in their path. This is in contrast to using additional sensors for ranging, which would increase computational load and power consumption on the low-energy hardware on which the rovers are ultimately planned to be deployed.

Detection occurs in two phases. In the first, the hypothesis generation phase, the ranging sensors detect an obstacle within their detection zone and form the hypothesis that the obstacle is a fellow agent; the rover then focuses on identifying the obstacle with the vision sensor. In the second, the hypothesis verification phase, the vision sensor verifies this hypothesis. It extracts frames from the real-time feed and passes them to a YOLOv3 model whose classifier was trained on 800 manually labelled images; over the course of training the model processed 900,992 images, reaching an average loss of 0.016559. To keep the system robust and to balance detection rate against computational load, the vision sensor captures a frame only every 100 milliseconds and processes each frame through the object detection model.
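
The two-phase loop described above can be sketched in MATLAB roughly as follows; readRange, captureFrame and runYolo are hypothetical stand-ins for the platform's actual sensor and detector interfaces, and the range threshold is an assumed value:

  % Sketch of the two-phase detection loop; readRange, captureFrame and
  % runYolo are hypothetical interfaces, not the platform's real API.
  RANGE_THRESHOLD = 2.0;   % metres; assumed detection-zone radius
  FRAME_PERIOD    = 0.1;   % seconds; one frame every 100 ms
  tic;
  lastFrame = -inf;
  while true
      dist = readRange();                    % fused LiDAR/SONAR range
      if dist < RANGE_THRESHOLD
          % hypothesis generation: an obstacle is inside the detection
          % zone, so hypothesise that it is a fellow agent
          t = toc;
          if t - lastFrame >= FRAME_PERIOD   % throttle the camera to 10 Hz
              frame = captureFrame();        % grab one frame from the feed
              state = runYolo(frame);        % 1 = rover detected, 0 = not
              lastFrame = t;
              if state == 1
                  % hypothesis verified: hand range and bearing to the
                  % agent unit for formation control
              end
          end
      end
  end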

Detection Flowchart.png

The distance data is filtered through an alpha-trimmed mean filter and a Kalman filter before being passed from the main function to the support functions, indicated as "Filtered Distance Data" in the figure Detection Flowchart II. Orientation data is filtered through a complementary filter and sent from the main function to the support functions for the hypothesis generation phase. The hypothesis is verified in the hypothesis verification phase, when the vision sensor returns a state value: "0" for no rover detected and "1" for a rover detected. The data is then fused and passed to the support function that calculates rectangular coordinates from polar coordinates, using the filtered distance as the radius and the angle from the IMU sensor. The detected rover is now localised within the detecting rover's local frame, and this value is passed to the support functions, which construct the global frame using the detecting rover's own coordinates as a reference point. Adding the local coordinates to this self-reference point locates detected rovers in the global frame.
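
In code, this coordinate chain reduces to a polar-to-rectangular conversion followed by a translation into the global frame. A minimal planar sketch in MATLAB, with our own function and variable names:

  % Sketch: localise a detected rover in the global frame (planar case).
  % r      : filtered distance to the detected rover (m)
  % theta  : filtered heading angle from the complementary filter (rad)
  % selfXY : 1-by-2 global coordinates of the detecting rover
  function objXY = localise(r, theta, selfXY)
      local = [r * cos(theta), r * sin(theta)];   % polar -> rectangular
      objXY = selfXY + local;                     % shift into the global frame
  end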

Detection Flowchart II.png


The following filters are necessary for robust performance of the detection module; they also provide a level of fault tolerance to the system.

Kalman Filter

The Kalman filter is a recursive algorithm that provides an optimal estimate of the system state, given the noise in the system. It estimates the unknown state from the previous estimate and new measurement data, updating at fixed time intervals specified in the model. Here the Kalman filter acts as a fault-tolerance mechanism that estimates the distance to an object even when the LiDAR and SONAR ranging sensors provide faulty data; the estimate approaches the optimum as the number of filter iterations increases.
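
For a single range channel, one step of such a filter can be sketched as follows, assuming a static-distance process model; the noise variances q and rNoise are placeholders for tuned values:

  % One update step of a scalar Kalman filter on the range estimate.
  % xEst, pEst : previous state estimate and its variance
  % z          : new (possibly faulty) LiDAR/SONAR measurement
  % q, rNoise  : assumed process and measurement noise variances
  function [xEst, pEst] = kalman_step(xEst, pEst, z, q, rNoise)
      pPred = pEst + q;                 % predict: uncertainty grows
      k     = pPred / (pPred + rNoise); % Kalman gain
      xEst  = xEst + k * (z - xEst);    % blend prediction with measurement
      pEst  = (1 - k) * pPred;          % update: uncertainty shrinks
  end

A faulty reading produces a large innovation z - xEst, but with a small gain k its effect on the estimate stays bounded, which is the fault-tolerance behaviour described above.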

Complementary Filter

The complementary filter is a recursive algorithm that provides an estimate of angular position and rate. It forms its estimate from the previous estimate and a combination of low-pass and high-pass filters, fusing slow-moving signals from the accelerometer with fast-moving signals from the gyroscope. Gyroscope data is used because it is very precise over short intervals and not susceptible to external forces, although it drifts over time; accelerometer data is noisier but does not drift, so it corrects the long-term estimate. These two inputs are sufficient to estimate angular position, so they are the only inputs to the complementary filter. The filter updates at fixed time intervals specified in the model, taking previous measurements into account when estimating new data.
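
A standard single-angle update illustrates the idea; the blend factor alpha = 0.98 is an assumed value, not the project's tuned parameter:

  % One update of a complementary filter for a tilt angle.
  % angle    : previous fused angle estimate (rad)
  % gyroRate : angular rate from the gyroscope (rad/s)
  % accAngle : angle derived from accelerometer axes, e.g. atan2(ay, az);
  %            noisy but drift-free
  % dt       : sample period (s)
  function angle = complementary_step(angle, gyroRate, accAngle, dt)
      alpha = 0.98;                                % assumed blend factor
      angle = alpha * (angle + gyroRate * dt) ...  % high-pass: gyro path
            + (1 - alpha) * accAngle;              % low-pass: accelerometer
  end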

Alpha Trimmed Mean Filter

Alpha-trimmed mean filters are a variation of adaptive mean filters that are especially effective against additive Gaussian noise mixed with impulsive components. The filter is an adaptation from image processing, where it is used to remove impulse noise, since it is a good compromise between the moving-mean filter and the median filter. It is appropriate when the noise in the system deviates from Gaussian by containing impulsive components. The filter slides a window over the continuous stream of data from the ranging sensors, with a given window size w and trimming factor α. Adaptive alpha-trimmed filters additionally allow α to be chosen based on local noise statistics.
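
One window of the filter can be sketched as follows, with w implied by the window length and α passed as the total fraction of samples to trim (one common convention among several):

  % Alpha-trimmed mean of one window of range samples.
  % window : vector holding the last w range readings
  % alpha  : total fraction of samples to trim (0 <= alpha < 1)
  function m = alpha_trimmed_mean(window, alpha)
      s = sort(window);                   % order the samples
      t = floor(alpha * numel(s) / 2);    % samples to drop at each end
      m = mean(s(1 + t : end - t));       % mean of the surviving samples
  end

With alpha = 0 this reduces to the moving mean; as alpha approaches 1 it approaches the median, which is where its robustness to impulsive outliers comes from.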

Results

Conclusion
