Projects:2021s2-63332 A collaborative solution for advancing Australian manufacturers



Introduction

Collaborative technology is the frontier innovation for taking advanced autonomous systems that already work intelligently, such as those assisting in search and rescue, bushfire fighting, and military missions, and making them even more effective by having them work collaboratively. Human-machine collaboration makes better use of autonomous and robotic systems because it combines human creativity and cognition with machine accuracy and efficiency. In the manufacturing sector in particular, humans and machines working interactively side by side are more productive than either working alone.

Project team

Project students

  • Liangchen Shu
  • Yuzuo Zhu
  • Guangzu Shi

Supervisors

  • Prof. Peng Shi
  • A/Prof. Rini Akmeliawati

Advisors

  • Xin Yuan
  • Yang Fei

Objectives

The aim of the project is to design a human-robot collaboration approach to improve Australian manufacturing. The project focuses on one scenario in which the human-robot collaboration method is applied, in order to prove the feasibility of the project design. To achieve this goal, a specific human-robot collaboration innovation must first be identified, and robotics and autonomous control theories are then integrated into the design to implement the scenario.

There are three objectives to achieve the project aim:

* Identify innovative applications to which collaborative technologies could be applied.
* Build a practical scenario and develop a cobot prototype applicable to Australian manufacturers.
* Prove the feasibility of the designed human-robot collaboration system.

Background

A study of Australia's manufacturing branches and industrial structure shows that several high-end manufacturing sectors have considerable potential to improve efficiency through the application of human-machine collaborative technology. For instance, food product manufacturing has the highest share of manufacturing employees at 23.1%, and the use of collaborative robots may help reduce labour intensity in that sector. [2]

In recent years, with increasing market demand and strong support from technology development, industrial robots have entered a stage of rapid development, and robots are becoming competent at more and more jobs. "Robot substitution" has spread through manufacturing, the service industry, and other sectors. Although there are discordant views, such as robots competing with humans for work or replacing humans outright, humans and robots can in fact maintain a relationship of mutual assistance and coexistence: robots can assist humans with complicated and heavy work, while humans adjust production according to actual needs.

Traditionally, workers have always maintained a safe physical separation from large industrial machinery. Large-scale manufacturing equipment such as the industrial robot operates at high voltage, high speed, and over a large working envelope, which provides high production efficiency but creates hazards for anyone nearby. [3] Therefore, most industrial robots in factories are isolated by fences or marked warning areas, with sensors that raise an alarm on accidental entry, in order to meet the ANSI/RIA Robot Safety Standard — R15.06 and ensure a safe manufacturing environment. [4]

In contrast, a collaborative robot (cobot) is a type of robot designed to assist humans in completing specific tasks, or to allow humans and robots to work simultaneously in a shared workspace. Owing to its design, a cobot is smaller, operates at lower speed, and carries anti-collision sensors to avoid serious harm to people and objects in the surrounding environment. Human-robot collaboration is an inevitable step in robot evolution: it is characterised by safety, ease of use, and low cost, and workers can operate a cobot much like an ordinary appliance. [5] The main difference between collaborative and industrial robots lies in their application scenarios and target users; cobots fill gaps in the industry with their small, portable form. [6] In addition, the modular design of a cobot supports personal customisation by simply replacing modular components, and an operator only needs brief training to understand a cobot's functions and characteristics, since its interface is designed to be simple and intuitive, reducing the required training time and technical skill. In terms of applications, the built-in sensors of collaborative robots detect obstacles and nearby people in order to reduce or even avoid harm. However, compared with industrial robots, collaborative robots have inherent deficiencies in high-intensity tasks because of their slower motion speed and lower carrying capacity.

Method

Virtual Platform Design

To implement the aircraft-parts inspection scenario on a virtual platform, CoppeliaSim (formerly V-REP) is suitable software for the simulation environment. Its built-in API functions can be integrated with MATLAB to realise the designed system functions. The software ships with several existing models that can be used directly to create this specific scene. For this case, the Universal Robots UR5e is the most suitable choice because it is the largest of the available robotic arms: "The UR5e is a lightweight, adaptable collaborative industrial robot that tackles medium-duty applications with ultimate flexibility. The UR5e is designed for seamless integration into a wide range of applications." The UR5e provides a wide operating range to carry the vision sensor across multiple sides of an aircraft part, with sufficient payload (5 kg) and a relatively long reach (850 mm). The vision sensor is of the perspective-projection type, whose field of view is trapezoidal; such sensors are well suited to camera-type sensing, so the sensor can scan the entire aircraft part and generate images showing every detail. The last component is the mobile base, for which the KUKA omniRob is selected. This large omnidirectional rover carries the robotic arm and its control box (the actual physical structure) to each destination with a high payload (170 kg) and achieves a positioning accuracy of up to +/- 5 millimetres, even in the tightest spaces.

The scanning path is designed to cover each side of the given object, as the figure presents. A target dummy is created to follow the path, and the dummy in turn leads the robotic arm's end position so that the arm tracks the path. The scanning path runs from the bottom of the object to the top on each side; taking the UR5e operating range into account, the limit of the up-down range is set to 0.58827 units in the virtual environment. The up-down path is designed as a circular arc so that it fits a variety of scanned objects. After completing one motion from bottom to top, the target moves 0.3 units to the left in the virtual environment to begin the next motion. The whole scanning process therefore alternates: the target moves from bottom to top, shifts left, then moves from top to bottom. Eventually the target travels around the whole object, generating the complete scanning path that leads the robotic arm through the scanning process.
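As a rough illustration of this motion logic, the following C++ sketch steps a target dummy through one bottom-to-top sweep and the subsequent lateral shift using CoppeliaSim's legacy remote API. It is only a sketch: the project drives the dummy from MATLAB, the sweep here is a straight vertical line rather than the circular arc described above, and the scene-object name "ScanTarget" and the start position are assumptions.

    extern "C" {
    #include "extApi.h"  // CoppeliaSim legacy remote API (C bindings)
    }

    int main() {
        // Connect to a CoppeliaSim instance on the default remote API port.
        int clientID = simxStart("127.0.0.1", 19997, true, true, 5000, 5);
        if (clientID == -1) return 1;

        simxInt target;
        // "ScanTarget" is a hypothetical name for the target dummy in the scene.
        simxGetObjectHandle(clientID, "ScanTarget", &target, simx_opmode_blocking);

        const float zRange = 0.58827f;   // vertical sweep limit (scene units)
        const float leftStep = 0.3f;     // lateral shift between sweeps (scene units)
        float pos[3] = {0.5f, 0.0f, 0.0f};  // assumed start at the object's bottom

        // One bottom-to-top sweep: raise the dummy in small increments so the
        // arm's end position can follow it smoothly.
        for (int i = 0; i <= 50; ++i) {
            pos[2] = zRange * i / 50.0f;
            simxSetObjectPosition(clientID, target, -1, pos, simx_opmode_oneshot);
            extApi_sleepMs(50);
        }
        // Shift left to start the next (top-to-bottom) column.
        pos[1] -= leftStep;
        simxSetObjectPosition(clientID, target, -1, pos, simx_opmode_oneshot);

        simxFinish(clientID);
        return 0;
    }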

The human-robot collaboration is developed via App Designer in MATLAB, in which a GUI is created. App Designer can be programmed to add buttons and attach functionality to them. Three sections are designed to achieve the human-robot collaboration function. The first section provides arrow keys with which a human manually operates the robotic arm; these keys are linked to the CoppeliaSim API functions that change the robotic arm's end [x, y, z] position. The second section similarly controls the position of the omnidirectional rover by linking arrow keys to the API motion functions. Another part of the second section handles images captured by the vision sensor: a window transmits images of the object surface from CoppeliaSim to the App Designer GUI. The transmission rate is set to 15, so the human operator receives 15 images per second, which is sufficient for validating details. The last section is designed for message transmission. The vision sensor can measure the distance to the object surface and identify potential cracks or dents, and the calculated values are sent back to the GUI for human validation to eliminate errors and failed detections.
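The image window relies on the remote API's standard streaming pattern: one call registers the stream, and subsequent calls read the latest frame from the buffer. The C++ sketch below shows the equivalent calls, assuming a sensor named "VisionSensor" in the scene; the project implements the same idea in MATLAB App Designer.

    extern "C" {
    #include "extApi.h"
    }

    // Stream vision-sensor frames at roughly 15 Hz, mirroring the GUI's
    // image window (the sensor name "VisionSensor" is an assumption).
    void streamImages(int clientID) {
        simxInt sensor;
        simxGetObjectHandle(clientID, "VisionSensor", &sensor, simx_opmode_blocking);

        simxInt resolution[2];
        simxUChar* image = NULL;
        // First call registers the stream; later calls read the buffer.
        simxGetVisionSensorImage(clientID, sensor, resolution, &image, 0,
                                 simx_opmode_streaming);
        while (simxGetConnectionId(clientID) != -1) {
            if (simxGetVisionSensorImage(clientID, sensor, resolution, &image, 0,
                                         simx_opmode_buffer) == simx_return_ok) {
                // image now holds an RGB frame of resolution[0] x resolution[1];
                // hand it to the GUI / validation logic here.
            }
            extApi_sleepMs(66);  // ~15 frames per second
        }
    }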

Physical Platform Design

The physical platform is built around an Aubo i5 robotic arm and an Intel RealSense D435i camera as the robot's main body. Each link dimension of the Aubo i5 closely matches the UR5e's, with an 886.5 mm operating range and a 5 kg payload. The Intel RealSense D435i depth camera provides depth detection and an RGB camera node, and captures images at up to 90 FPS (frames per second), giving enough samples to identify cracks and dents. ROS is selected as the robot development environment: the ROS master links all launched nodes so that they operate together to implement the designed system, and MoveIt! is integrated with ROS to provide the inverse kinematics plugin and the move-group planner that drive the robotic arm along the scanning path from supplied end-of-arm (EOA) poses.

The primary ROS client libraries are optimised for Unix-like systems because of their reliance on extensive collections of open-source software dependencies. ROS is a framework that sits atop an operating system, which allows it to abstract hardware from software: the user can program the robot without dealing directly with the hardware. At least in the field of service robots, ROS is quickly becoming the industry standard for robotics programming. It began in academia but quickly spread to industry, and an increasing number of organisations and startups use it every day. Before ROS, every robot had to be programmed through the manufacturer's API, so changing robots required not only learning a new API but also rewriting the entire software stack, and users needed a good understanding of the robot's electronics to know how each program interacted with it. This is analogous to computing in the 1980s, when each computer had its own operating system and the same program had to be rewritten for each one. ROS is to robots what Windows and Android are to computers: users can write programs that can be shared between robots, provided those robots run on ROS. For example, the scanning process this project designs for an Aubo i5 robotic arm can be applied to another company's robotic arm.

Additionally, the Aubo i5 can be operated through a teach pendant, which provides basic functions including position control, force control, and simple programming for path generation. However, the teach pendant has a serious limitation: its programming interface contains only basic path-generation functions, so it is inefficient for programming the arm to execute complicated tasks. The Aubo i5 can also communicate with other devices via IO pins, but this is a complicated and inefficient way to operate multiple devices at once. Because ROS supports the Python and C++ programming languages and draws on extensive open-source software dependencies, it is the most efficient way to design the cobot system and achieve collaboration between multiple devices.

The PC running the ROS environment must be connected to the robotic arm so that the arm's motion can be programmed from ROS. The most suitable approach is to connect the PC and the Aubo i5 control box with a network cable for Ethernet communication. The IP addresses of the PC and the control box must be set to the same subnet, with identical netmask and gateway settings. After that, pinging the robotic arm's IP address in a terminal verifies whether the connection has been established. Then the moveit_planning_execution launch file is run to control the real robotic arm for motion planning: the operator manually drags the interactive target to a new pose, and the arm moves to the updated target pose after the plan-and-execute button is pressed.

The final step of the scanning-process design is programming the robotic arm to follow the scanning path and to communicate with the messages published by the camera. The design contains two major sections: robotic arm motion planning and arm-camera interconnection.

Robotic Arm Motion Planning: The most significant step is to generate the scanning path and control the robotic arm's motion along it. The motion control is programmed in C++ using the MoveIt! API functions for the robotic arm move-group controller and for collision avoidance. The scanning path is the same as on the simulation platform: a circular arc lets the arm scan each side of the object surface from bottom to top. In the program, the scanning path is built from multiple waypoints stored in a vector of geometry_msgs poses; the waypoints are interpolated as a Cartesian path, meaning the arm moves in straight-line segments between them, and the computed Cartesian segments are finally assembled into a trajectory. The more points stored, the smoother the path, which keeps the camera at a stable orientation and distance from the object surface. Moreover, the chosen object has three surfaces to be scanned, so after completing one side the base joint rotates 90 degrees to keep the camera facing the object surface, and the arm executes the same scanning path again. The whole process is fully automatic, and the scale and orientation of the scanning path are determined by the size and location of the selected object, so changing the operating angle and radius is enough to switch between scanned objects.
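The following is a condensed sketch of this waypoint-based Cartesian planning with the MoveIt! move-group C++ interface. The planning-group name "manipulator_i5", the arc parameters, and the success threshold are assumptions, and the base-joint rotation between faces is omitted; it also assumes a ros::AsyncSpinner is running so the current pose is available.

    #include <moveit/move_group_interface/move_group_interface.h>
    #include <moveit_msgs/RobotTrajectory.h>
    #include <geometry_msgs/Pose.h>
    #include <cmath>
    #include <vector>

    void scanOneFace(moveit::planning_interface::MoveGroupInterface& arm) {
        std::vector<geometry_msgs::Pose> waypoints;
        geometry_msgs::Pose p = arm.getCurrentPose().pose;

        // Sample the bottom-to-top circular arc as dense waypoints; more
        // points give a smoother path and a steadier camera stand-off.
        const double radius = 0.40;  // scan radius in metres (assumed)
        for (int i = 0; i <= 30; ++i) {
            double theta = -M_PI / 4 + (M_PI / 2) * i / 30.0;  // arc angle
            p.position.x = radius * std::cos(theta);
            p.position.z = 0.3 + radius * std::sin(theta);
            waypoints.push_back(p);
        }

        // Interpolate the waypoints as straight-line (Cartesian) segments.
        moveit_msgs::RobotTrajectory trajectory;
        double fraction = arm.computeCartesianPath(waypoints,
                                                   0.01 /* eef_step, m */,
                                                   0.0  /* jump_threshold */,
                                                   trajectory);
        if (fraction > 0.95) {  // most of the path is reachable
            moveit::planning_interface::MoveGroupInterface::Plan plan;
            plan.trajectory_ = trajectory;
            arm.execute(plan);
        }
    }

The returned fraction reports how much of the requested path could be interpolated, which is a convenient check that the whole face is within the arm's reach before execution.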

The next step is path optimisation. To transform the trajectory into robotic arm joint states, the inverse kinematics (IK) plugin plays a significant role in the path planning process. The initial IK choice was the KDL kinematics solver, which on the generated path could only provide an operating radius of up to 20 centimetres. To increase the operating radius and ensure the arm can scan the object surface from bottom to top, the IKFast kinematics solver is applied to optimise the scanning path. IKFast, the Robot Kinematics Compiler, is a sophisticated inverse kinematics solver included in Rosen Diankov's OpenRAVE motion planning software. Unlike other inverse kinematics solvers, IKFast solves the kinematics equations of any complex kinematic chain analytically and generates language-specific files (such as C++) for subsequent use; on modern processors the resulting solutions are extremely reliable and can run in as little as 5 microseconds. After applying the IKFast kinematics solver in the robotic arm's MoveIt! configuration, the operating radius increases to 40 centimetres, double the previous range, so the scanning path can be applied to different object surfaces to accomplish crack and dent detection.
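For reference, MoveIt! selects the IK plugin in the generated kinematics.yaml, so the swap amounts to replacing the default KDL entry with the plugin the IKFast generator produces. The group and plugin package names below are assumed examples, not the project's exact configuration:

    manipulator_i5:
      # default was: kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
      kinematics_solver: aubo_i5_manipulator_ikfast_plugin/IKFastKinematicsPlugin
      kinematics_solver_search_resolution: 0.005
      kinematics_solver_timeout: 0.005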

Another significant factor to consider is collision avoidance. In the real world, the robotic arm does not know the size or location of the scanned object and other obstacles. ROS therefore provides the RViz interface to manually create collision objects in the same coordinate system as the robotic arm's base joint. Because the ROS master launches all required nodes simultaneously, the robotic arm can recognise the collision objects in its own coordinate system, and MoveIt! updates the move-group plan in real time so the arm follows the scanning path without colliding with them.
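A minimal sketch of registering such a collision object programmatically through the MoveIt! planning-scene interface (in the project the object is created manually through RViz; the frame name "base_link", the box dimensions, and the placement here are illustrative assumptions):

    #include <moveit/planning_scene_interface/planning_scene_interface.h>
    #include <moveit_msgs/CollisionObject.h>
    #include <shape_msgs/SolidPrimitive.h>
    #include <geometry_msgs/Pose.h>

    // Register the scanned part as a collision object in the planning scene,
    // expressed in the arm's base frame.
    void addScannedPart(moveit::planning_interface::PlanningSceneInterface& psi) {
        moveit_msgs::CollisionObject obj;
        obj.header.frame_id = "base_link";
        obj.id = "scanned_part";

        shape_msgs::SolidPrimitive box;
        box.type = shape_msgs::SolidPrimitive::BOX;
        box.dimensions = {0.4, 0.4, 0.6};  // x, y, z extents in metres

        geometry_msgs::Pose pose;
        pose.orientation.w = 1.0;
        pose.position.x = 0.6;  // placed in front of the arm

        obj.primitives.push_back(box);
        obj.primitive_poses.push_back(pose);
        obj.operation = moveit_msgs::CollisionObject::ADD;

        // MoveIt!'s planners now route the scanning path around this volume.
        psi.applyCollisionObject(obj);
    }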

Arm-camera Interconnection: The robotic arm is interconnected with the Intel RealSense D435i to identify cracks and dents while executing the scanning path. The arm also records the locations of identified cracks and dents so that a human can validate them in real time, rather than only from the saved images.

The first step is to establish the connection between the robotic arm and the camera. The camera is attached to the EOA in a fixed orientation using a 3D-printed mount. The camera then connects to a second PC that runs the camera-recognition node, and that PC connects to the ROS master, allowing the camera's detection information to be exchanged between the two machines. The camera node uses socket() calls to transmit data over the TCP/IP protocol, and a ROS node uses the same mechanism to receive the message when the socket is triggered. Finally, the receiving node is launched under the ROS master, so the robotic arm motion node receives the same message whenever the camera node sends information, by triggering the chatterCallback() function.
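A minimal sketch of the receiving side under the ROS master, assuming the bridge node republishes the camera's TCP payload as a std_msgs/String on a hypothetical topic "camera_defects":

    #include <ros/ros.h>
    #include <std_msgs/String.h>

    // Fires whenever the camera bridge relays a detection over the ROS graph.
    void chatterCallback(const std_msgs::String::ConstPtr& msg) {
        ROS_INFO("Defect message received: %s", msg->data.c_str());
        // The motion node records the current arm pose here (see below).
    }

    int main(int argc, char** argv) {
        ros::init(argc, argv, "defect_listener");
        ros::NodeHandle nh;
        ros::Subscriber sub = nh.subscribe("camera_defects", 10, chatterCallback);
        ros::spin();  // process incoming messages until shutdown
        return 0;
    }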

Additionally, the information sent by the camera node serves as a trigger that makes the robotic arm record its current pose and append the index of that pose to a validation vector. Each recorded pose marks the location of an identified crack or dent, so if a human needs to view a defect live, the robotic arm can automatically return to the recorded pose given its index, and the camera then presents the identified crack or dent for human validation.
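A sketch of this record-and-revisit logic with the move-group interface; the function names and the simple pose vector are assumptions consistent with the description above:

    #include <moveit/move_group_interface/move_group_interface.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <vector>

    // Poses at which the camera reported a crack or dent.
    static std::vector<geometry_msgs::PoseStamped> recorded_poses;

    // Called on each detection trigger: remember where the camera was looking.
    void recordDefectPose(moveit::planning_interface::MoveGroupInterface& arm) {
        recorded_poses.push_back(arm.getCurrentPose());
    }

    // Drive the arm back to recorded pose `idx` for live human validation.
    bool revisitDefect(moveit::planning_interface::MoveGroupInterface& arm,
                       std::size_t idx) {
        if (idx >= recorded_poses.size()) return false;
        arm.setPoseTarget(recorded_poses[idx]);
        return arm.move() == moveit::planning_interface::MoveItErrorCode::SUCCESS;
    }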

Results

Conclusion

References


[2] "Manufacturing | Labour Market Insights", Labourmarketinsights.gov.au, 2020. [Online]. Available:https://labourmarketinsights.gov.au/industries/industry-details?industryCode=C.[Accessed: 15- May- 2022].

[3] C. S. Franklin, E. G. Dominguez, J. D. Fryman, and M. L. Lewandowski, "Collaborative robotics: New era of human-robot cooperation in the workplace", ScienceDirect, 2020, pp. 153–160.

[4] W. Karwowski, Ed., "Robot Safety Standard — R15.06", in International Encyclopedia of Ergonomics and Human Factors, 2nd ed., Boca Raton, 2006, pp. 2768–2771.

[5] C. S. Franklin, E. G. Dominguez, J. D. Fryman, and M. L. Lewandowski, "Collaborative robotics: New era of human-robot cooperation in the workplace", ScienceDirect, 2020, pp. 153–160.

[6] A. Realyvásquez-Vargas, K. Cecilia Arredondo-Soto, and J. Luis García-Alcaraz, "Introduction and configuration of a collaborative robot in an assembly task as a means to decrease occupational risks and increase efficiency in a manufacturing company", Robotics and Computer-Integrated Manufacturing, vol. 57, pp. 315–328, 2019.