Projects:2020s1-1310 How Machines Think: Multi-entity Autoencoder

Revision as of 23:47, 23 April 2020 by A1707256 (talk | contribs)



Project Team

Project student

  • Nick Wynd

Supervisors

  • Cheng-Chew Lim (University of Adelaide)
  • Daniel Gibbons (DST)
  • David Hubczenko (DST)
  • Jijoong Kim (DST)

Abstract

Recent decades have seen a rapid rise in Artificial Intelligence: it is present almost everywhere, from social media and consumer technology to intelligent agent coordination. Yet it remains unclear how artificial networks, such as Deep Neural Networks, understand their surrounding environment. To determine how these architectures represent their environment, this project will investigate state-of-the-art deep learning architectures and verify their ability to map multiple entities in an environment.

Introduction

The aim of this project is to investigate suitable neural architectures for representing multiple entities within an environment. This involves implementing a decoder on top of state-of-the-art Deep Neural Networks to verify whether the encoding methods of these architectures can accurately encode environmental entities. The candidate architectures will then be compared against a set of criteria to find a suitable network that can be applied to a Capture the Flag game scenario.
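The encode-decode verification idea can be illustrated with a minimal sketch: encode observations of several entities into a low-dimensional latent code, decode them back, and check whether reconstruction error falls during training. This is an illustrative linear autoencoder in NumPy under assumed toy data (three entities with 2-D positions, a 2-D latent space, plain gradient descent), not the project's actual architecture or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "environment": 200 observations, each a flattened vector of
# three entities' (x, y) positions -> 6 input features. The data is
# generated from 2 latent factors plus noise, so a 2-D code suffices.
n_samples, n_in, n_latent = 200, 6, 2
Z_true = rng.normal(size=(n_samples, n_latent))
mixing = rng.normal(size=(n_latent, n_in))
X = Z_true @ mixing + 0.1 * rng.normal(size=(n_samples, n_in))

# Linear autoencoder: encoder maps 6 -> 2, decoder maps 2 -> 6.
W_enc = rng.normal(scale=0.1, size=(n_in, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_in))

def reconstruction_error(X, W_enc, W_dec):
    Z = X @ W_enc        # encode all entities into the latent space
    X_hat = Z @ W_dec    # decode back to entity observations
    return np.mean((X - X_hat) ** 2)

initial_err = reconstruction_error(X, W_enc, W_dec)

# Train both maps jointly by gradient descent on the squared error.
lr = 0.01
for _ in range(500):
    Z = X @ W_enc
    err = Z @ W_dec - X
    grad_dec = Z.T @ err / n_samples
    grad_enc = X.T @ (err @ W_dec.T) / n_samples
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final_err = reconstruction_error(X, W_enc, W_dec)
print(f"reconstruction error: {initial_err:.3f} -> {final_err:.3f}")
```

If the latent code captures the entities, the decoder's reconstruction error drops toward the noise floor; if the encoding loses entity information, the error stays high. The same verification logic carries over to the deep architectures compared in this project.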