Projects:2018s1-110 Future Submarine Project

Revision as of 00:13, 22 October 2018 by A1687420 (talk | contribs)

Project Team

Students

Tharidu Maliduwa Arachchige

Jacob Parker

Supervisors

Dr. Danny Gibbins

Igor Dzeba (SAAB)

Abstract

This project involves the research into and development of a Contact Detection method to be used with a Submarine Optronics System. The aims of this project include:

  1. Detecting the presence of a ship at long range and, where possible, estimating its range and aspect
  2. At closer range, detecting the ship or target, identifying/verifying its type, and determining its range and aspect/orientation.
  3. Overcoming challenges such as:
    1. Accumulating a database of ship models large enough to cover the many different ship types that would need to be recognised by the system.
    2. Limited viewing conditions, as:
      1. The submarine's periscope often reaches a height of only 60 cm above the surface of the water
      2. Tall waves and the curvature of the Earth may mean that ships are only partially visible
      3. Bad weather conditions mean that a clear horizon line may not be visible, and ships/targets blend into the background
    3. There may be insignificant anomalies in the obtained image that are not ships/targets and should be ignored by the system (e.g. landmasses, birds/sea animals on the surface, infrastructure on the surface of the water).

The project will involve a thorough literature review of object detection methods using image processing techniques in maritime applications, followed by software development and testing. This is an industry-sponsored project, sponsored by SAAB Australia.

Introduction

Aim

The project aimed to fulfil several goals, including:

  1. Research and review existing image processing techniques for object and/or ship detection and recognition in optical images. An understanding of existing techniques is important in determining whether, and how, they can be incorporated into the designs produced by the project.
  2. Develop a system that
    1. At close range to a detected ship, can recognise the ship type, estimate its range from the submarine and estimate its orientation.
    2. At long range to a detected ship, can provide guidance as to what the ship might be (type verification) and provide an estimate of its range and orientation.

Motivation

The project was sponsored by Saab Australia, a major influence in the Australian Defence industry. With the upcoming submarine research and development plans, Saab Australia is highly motivated to conduct research into the different technologies that may be incorporated into a submarine.

For more than a century, submarine operators relied solely on direct-view optical periscopes to gain insight into the environment above the surface of the water [1]. For much of this period, operators relied on black-and-white images of ship silhouettes to identify any vessels viewed through the periscope. With the evolution of technology, however, electronic periscopes have been developed, providing many forms of assistance to operators. The motivation for the research conducted in this project is to develop and test image processing techniques for ship and threat detection in the optical images obtained through an electronic periscope of a submarine, providing guidance to operators in identifying the environment around them.

Research Methodology

Three horizon detection methods and two target detection methods were researched and implemented. Of these, two horizon detection methods and one target detection method, with possible enhancements, are discussed in this thesis. An overview and some results of the methods investigated by the project partner are included as a comparison to the methods described in detail in this document. Those approaches can be found in the thesis document produced by the project partner, which is listed in the references.

For testing the produced algorithms, a large data set of imagery was required. This was acquired initially using imagery available on the internet. An issue with this method of data collection was that the sources of the imagery were inconsistent, resulting in inconsistent quality and varying levels of relevance to the application. Further imagery was therefore obtained first-hand by recording footage of a ship dock, where there was plenty of movement as different ships arrived and departed. Single frames were then extracted from the footage to test the algorithms.

Horizon Detection Methods

Hough Transform Method

A horizon detection algorithm using the Hough Transform was implemented. The algorithm takes a colour image as input but performs its processing steps only on the luminance component, so the image is converted to greyscale.

An outline of the developed algorithm follows:

  1. Load the input image in colour and extract the luminance component.
  2. Apply a Gaussian filter to smooth the image.
  3. Execute a Canny edge detector to produce a binary edge map of the image.
  4. Compute the Hough Transform of the binary edge image.
  5. Find the peak in the Hough Transform data that corresponds to the most dominant line in the image.
  6. Plot the found line using its ρ and θ components.

MATLAB's built-in functions for the Canny edge detector and the Hough Transform were used for this implementation. As discussed in section 2.1, the Canny edge detector performs best when the input image is pre-processed to remove image noise. The Canny detector used in the implementation does not incorporate this, so a Gaussian filter with a standard deviation of 1.4 [5] is applied to achieve the noise reduction. The implementation makes use of two Hough-transform-related functions, hough() and houghpeaks(). The former is used to form and plot the Hough transform of the edge map produced by the Canny edge detector, and the latter is used to find its peak, representing the most dominant line in the input image.

By default, the hough() function is calibrated around vertical lines, such that a vertical line is read at an angle of 0° and horizontal lines at ±90°. As the Hough transform is plotted between -90° and 90°, horizontal lines appear at the edges of the graph, causing inaccuracies in the determination of peaks. This was resolved by rotating the input image by 90° and searching for a now-vertical line in the image as the horizon. This also makes it possible to narrow the window of the Hough transform plot to between -15° and 15°, reducing the computational load.

Once the Hough transform plot is obtained, the houghpeaks() function is used to locate its peak, and the corresponding ρ and θ values are extracted. Compensating for the rotation, these two values can be used to plot a line on the original image with equation y = (ρ − x·sin θ)/cos θ.
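The pipeline above can be sketched in Python with NumPy (the project's own implementation was in MATLAB). A plain gradient-magnitude threshold stands in for the Canny detector, and for brevity the image is not rotated, so a horizontal horizon appears at θ ≈ 90° in the parameterisation ρ = x·cos θ + y·sin θ:

```python
import numpy as np

def detect_horizon_hough(gray, n_theta=181):
    """Sketch of the Hough-transform horizon detector: smooth, edge-detect,
    vote in (rho, theta) space, and return the dominant line's parameters,
    with rho = x*cos(theta) + y*sin(theta) (horizontal line => theta ~ pi/2)."""
    # 1. Separable Gaussian smoothing (sigma ~ 1.4, as in the text), using
    #    edge padding so the image borders do not create false edges.
    sigma, radius = 1.4, 3
    ax = np.arange(-radius, radius + 1)
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    g /= g.sum()

    def smooth1d(a, axis):
        pad = [(0, 0), (0, 0)]
        pad[axis] = (radius, radius)
        padded = np.pad(a, pad, mode="edge")
        return np.apply_along_axis(lambda v: np.convolve(v, g, mode="valid"),
                                   axis, padded)

    smoothed = smooth1d(smooth1d(np.asarray(gray, dtype=float), 0), 1)

    # 2. Gradient-magnitude edge map (a simple stand-in for the Canny detector).
    gy, gx = np.gradient(smoothed)
    mag = np.hypot(gx, gy)
    edges = mag > (mag.mean() + 2 * mag.std())

    # 3. Accumulate votes over the (rho, theta) parameter space.
    h, w = smoothed.shape
    thetas = np.linspace(0.0, np.pi, n_theta)
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edges)
    for j, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        acc[:, j] += np.bincount(rhos, minlength=2 * diag + 1)

    # 4. The accumulator peak corresponds to the most dominant line.
    rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[theta_idx]
```

Running this on an image with a bright upper region and a dark lower region returns a near-horizontal line at the brightness boundary, mirroring what hough() and houghpeaks() produce in MATLAB.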


DCT-based Method

This section discusses a target detection algorithm using the characteristics of the Discrete Cosine Transform (DCT) coefficients. The first stage of the algorithm is a horizon detection step using these characteristics, before the sea surface is modelled with a Gaussian Mixture Model (GMM) for ship or object detection.

This horizon detection process involves decomposing the luminance component of an input image into 8x8 blocks and applying the DCT to each block. Each 8x8 DCT block is then labelled with a t-score, the ratio of the mean (Ā) of that block's coefficients to the maximum mean over the entire image (Ā_max); that is, t = Ā/Ā_max. Ideally, the t-scores obtained from the sea segment occupy a different range from those obtained from the sky regions of the image, producing a bimodal distribution of values between 0 and 1. A threshold can therefore be selected such that t-scores above it belong to sky regions and those below it represent blocks in the sea. The reference paper claims with 95% confidence that the ideal threshold value lies in the interval between 0.065 and 0.135. However, this was found not to hold consistently for the available test data, and the optimal threshold varied greatly between input images. This was solved by using a function that automatically calculates the threshold by analysing the distribution of the data in question. The graythresh() function is built into MATLAB and assumes a clear bimodal separation in the data. Using this thresholding, an initial segmentation of sea and sky in the image is obtained.

Another variation from the implementation described in the paper lies in the determination of the horizon line. Zhang draws an approximate horizon line using the central points of all bottommost blocks in the sky region. The adapted implementation instead applies the Hough Transform method discussed in section 4.1.1 to the segmented binary image to find the location of the horizon line.
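For reference, graythresh() implements Otsu's method: it picks the threshold that maximises the between-class variance of the (ideally bimodal) histogram. A minimal NumPy sketch of that method, applied to t-scores in [0, 1]:

```python
import numpy as np

def otsu_threshold(scores, n_bins=256):
    """Automatic threshold via Otsu's method (the algorithm behind MATLAB's
    graythresh): choose the histogram cut that maximises the between-class
    variance of the two resulting classes."""
    scores = np.asarray(scores, dtype=float)
    hist, bin_edges = np.histogram(scores, bins=n_bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    total = hist.sum()
    bin_centres = (bin_edges[:-1] + bin_edges[1:]) / 2
    sum_all = (hist * bin_centres).sum()

    best_t, best_var = 0.0, -1.0
    w0 = 0.0    # running weight of the class below the candidate cut
    sum0 = 0.0  # running weighted sum of that class
    for i in range(n_bins - 1):
        w0 += hist[i]
        sum0 += hist[i] * bin_centres[i]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, bin_centres[i]
    return best_t
```

Given a clearly bimodal set of t-scores, the returned threshold falls between the two modes, separating sea blocks from sky blocks without a hand-tuned value.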

An outline of the DCT-based horizon detector follows:

  1. Load the input image in colour and extract the luminance component.
  2. Resize the image to dimensions divisible by 8.
  3. For each 8x8 block of pixels in the image:
  4. Calculate its DCT coefficients.
  5. Calculate the mean of the DCT coefficients for that block (Ā), disregarding the first (DC) element.
  6. Obtain the t-score for each block by dividing each mean by the maximum of the means, t = Ā/Ā_max.
  7. Obtain the threshold value for all t-scores. This is the value that separates t-scores belonging to the sky from those belonging to the sea. Using this value, test each t-score to determine whether the 8x8 block from which the score was derived belongs to the sky or sea component of the image, and obtain a sea-sky segmentation of the image.
  8. Input the segmented image to the Hough Transform line detector outlined in section 4.1.1 to find the horizon using the DCT-based segmentation.
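Steps 2-6 above can be sketched in Python with NumPy (the project used MATLAB; the explicit orthonormal DCT-II matrix here is a stand-in for a library DCT routine):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: C[k, m] = a_k cos(pi (2m+1) k / 2n)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] *= 1 / np.sqrt(2)
    return c * np.sqrt(2 / n)

def tscore_map(gray):
    """t-score per 8x8 block: the mean absolute DCT coefficient of the block
    (DC element disregarded), normalised by the maximum mean over the image."""
    gray = np.asarray(gray, dtype=float)
    h, w = gray.shape
    h8, w8 = h - h % 8, w - w % 8          # crop to dimensions divisible by 8
    gray = gray[:h8, :w8]
    d = dct_matrix(8)
    means = np.zeros((h8 // 8, w8 // 8))
    for bi in range(h8 // 8):
        for bj in range(w8 // 8):
            block = gray[bi * 8:(bi + 1) * 8, bj * 8:(bj + 1) * 8]
            coeffs = d @ block @ d.T       # 2-D DCT of the block
            coeffs[0, 0] = 0.0             # disregard the DC element
            means[bi, bj] = np.abs(coeffs).mean()
    return means / means.max()             # t = mean / max mean, in [0, 1]
```

On a scene with a smooth region and a textured region, the textured blocks carry more AC energy and therefore receive higher t-scores, which is what makes the distribution separable by a threshold.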

Target Detection Method

DCT and GMM Based Target Detection Method

This section describes the primary target detection method developed in this project, which uses the DCT coefficients obtained as in the DCT-based horizon detector outlined in section 4.1.2. The approach taken is to split the image into two components, sea and sky, using a horizon line detector. The algorithm from the literature uses the DCT-based horizon detector to find the line between sea and sky; the algorithm adapted for this project, however, uses the Hough-Transform-derived horizon line detector outlined in section 4.1.1 for a more accurate detection.

The target detection process uses the average of the DCT coefficients in each 8x8 block to segment the entire image into two components, sky and not-sky. This is achieved by taking the DCT coefficients in each 8x8 block of image pixels and computing the normalised mean, as in the DCT-based horizon detection algorithm (4.1.2). Similarly, a threshold value that best separates the DCT averages of sky regions from those of other regions is found. All blocks with a DCT average less than this threshold are classified as sky, while the remaining blocks are classified as not-sky, yielding a sky/not-sky segmentation of the image. While this is an effective way to eliminate the sky background, the complex textures on the sea surface, such as waves and wakes, cause some sea regions to pass the threshold as significant foreground pixels. The DCT-average threshold test is therefore applied only to the area of the image above the horizon line, which is found using the Hough Transform horizon line detector.

For the sea region of the image, texture-based features are extracted as three region energies from each 8x8 block of DCT coefficients, as explained in the literature (section 2.2), and listed in a feature vector, X. The feature vectors are used to train a Gaussian Mixture Model representing the sea background. The sea pixels used for training must therefore lie a sufficient number of pixels below the horizon line, to ensure that pixels from potential targets do not contaminate the GMM training data.

Once the GMM is trained, a Mahalanobis distance is calculated for each sea-training feature vector to measure the degree of match between the feature vectors and the Gaussian mixtures, and a threshold is defined as the maximum distance from a feature vector to the centre of the Gaussian mixture model. Once this threshold is set, feature vectors are extracted from the image for all pixels below the horizon line, including any belonging to potential targets. The Mahalanobis distances between each vector and the GMM for the sea background are then calculated and compared with the defined threshold. All distances less than the threshold are classified as belonging to sea regions, while the others may be sky or an anomaly. This produces another segmentation, of sea and not-sea.

By subtracting the sea segmentation from the sky segmentation, a binary mask is obtained in which the two background components, sea and sky, are 0s and potential targets are 1s. The mask will, however, contain false detections due to outliers in the image data in both the sea and sky regions, which can be cleaned up using morphological operations such as erosion and dilation. Finally, the detected target is displayed by applying bounding boxes to the remaining regions of 1s in the final binary mask, overlaid on the original input image. An outline of the algorithm follows:

  1. Load input image in colour and extract luminance component.
  2. Resize the image to dimensions divisible by 8.
  3. For each 8x8 block of pixels in the image:
  4. Calculate its DCT coefficients.
  5. Calculate the mean of the DCT coefficients for that block (Ā), disregarding the first (DC) element.
  6. Obtain the t-score for each block by dividing each mean by the maximum of the means, t = Ā/Ā_max.
  7. Obtain the threshold value for all t-scores. This is the value that separates t-scores belonging to the sky from those belonging to the sea.
  8. Apply the Hough Transform Horizon Detector to find the horizon line.
  9. Using the threshold value, test each t-score from regions above the horizon line to determine whether the 8x8 block belongs to the sky or an anomaly and produce a binary image where the sky background pixels have value 0 and areas of interest have value 1.
  10. Extract the region energies for DCT blocks that are located at least 16 pixels below the horizon and compile the energies into a feature vector of 3 columns, X_train.
  11. Fit a Gaussian Mixture Model to the feature vector.
  12. Compute the Mahalanobis distance between the GMM and each feature vector. Use the maximum distance found as a threshold, T.
  13. Extract the region energies for all DCT blocks below the horizon line, and compile a feature vector, X.
  14. Calculate the Mahalanobis distance between the GMM and each feature vector.
  15. Use the threshold T to determine whether each block belongs to the sea background or an anomaly on the sea surface.
  16. Combine the two segmentations in order to achieve a binary mask where only regions of interest have high value.
  17. Eliminate any detected regions located sufficiently far above or below the horizon line to reduce false detections.
  18. Perform a morphological dilation to increase the detection size and apply a bounding box around the detection. The dilation will help ensure the entire object is bound within the applied box.
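The background-model training and Mahalanobis-distance test (steps 11-15) can be sketched as follows. For brevity, a single Gaussian is used as a one-component stand-in for the Gaussian Mixture Model, which is sufficient to illustrate the distance threshold; the feature extraction is assumed to have already produced an N×3 array of region energies:

```python
import numpy as np

def mahalanobis(features, mu, cov_inv):
    """Mahalanobis distance of each row of `features` from centre `mu`."""
    diff = features - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, cov_inv, diff))

def fit_sea_model(train_features):
    """Fit a single Gaussian to the sea-training feature vectors (a
    one-component stand-in for the GMM in the text). Returns the model
    centre, inverse covariance, and the distance threshold T, defined as
    the maximum distance of any training vector from the centre."""
    mu = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    # Small ridge term guards against a singular covariance matrix.
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(cov.shape[0]))
    threshold = mahalanobis(train_features, mu, cov_inv).max()
    return mu, cov_inv, threshold

def classify_blocks(features, mu, cov_inv, threshold):
    """True where a block's distance exceeds T: a potential target rather
    than sea background (step 15)."""
    return mahalanobis(features, mu, cov_inv) > threshold
```

Feature vectors drawn from the same distribution as the training sea blocks fall almost entirely inside the threshold, while vectors from a visually distinct object sit far outside it, so they survive into the final binary mask.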