Projects:2017s1-122 On-Chip Learning


Supervisor

Dr Braden Phillips

Group members

Xiaoyang Dai

Tao Zeng

Abstract

This thesis explores efficient methods for implementing a digit classification neural network on a chip, and several approaches are evaluated. Within the scope of this study, a comprehensive comparison and evaluation of the available weight quantization methods is carried out, and power-of-two and normalized power-of-two quantization are identified as the two best-performing approaches. Further, three network optimization strategies, namely neuron replacement, alternative activation functions, and matrix multiplication optimization, are proposed and tested to make the target neural network more hardware friendly and to simplify its hardware implementation. The results obtained so far are essential components of the proposed outcomes and indicate the next steps of this study.
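To illustrate the kind of quantization compared in this study, below is a minimal Python sketch of power-of-two weight quantization. The function name, bit budget, and clipping range are illustrative assumptions rather than the exact scheme used in the thesis; normalized power-of-two quantization would additionally rescale each layer's weights before quantizing. The idea is that once every weight is a signed power of two, each multiplication on the FPGA reduces to a bit shift.

    import numpy as np

    def power_of_two_quantize(weights, n_bits=4):
        # Illustrative sketch only: round each weight to the nearest
        # power of two in the log domain. n_bits is an assumed budget
        # for the exponent, not a value taken from the thesis.
        signs = np.sign(weights)
        magnitudes = np.abs(weights)
        exponents = np.round(np.log2(np.maximum(magnitudes, 1e-12)))
        # Clamp exponents to the range representable with n_bits.
        min_exp = -(2 ** (n_bits - 1)) + 1
        exponents = np.clip(exponents, min_exp, 0)
        return signs * np.exp2(exponents)

    w = np.array([0.37, -0.9, 0.05, -0.002])
    print(power_of_two_quantize(w))  # [ 0.5  -1.  0.0625  -0.0078125]

Note how the smallest weight saturates at the minimum exponent: narrowing the exponent budget trades classification accuracy for cheaper shift-based hardware, which is the trade-off the quantization comparison evaluates.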

Introduction

Deep learning neural networks are becoming popular and are being widely researched across a broad range of areas because of their excellent performance, strong adaptability, and promising commercial prospects. However, because deep neural networks place heavy demands on hardware platforms and energy supply, their applications have so far been limited in areas with tight hardware and energy budgets, such as the Internet of Things. Consequently, efficient and commercially viable implementations of neural networks are in high demand [1]. In this thesis, a Field-Programmable Gate Array (FPGA) based implementation of a digit classification deep neural network is researched, with a focus on the necessary weight quantization strategies and the network optimizations required for FPGA deployment.