<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1798511</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1798511"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1798511"/>
	<updated>2026-04-23T12:48:53Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15968</id>
		<title>Projects:2021s1-13352 Graph Neural Networks for Detecting Insider Threats</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15968"/>
		<updated>2021-04-06T06:03:52Z</updated>

		<summary type="html">&lt;p&gt;A1798511: /* Supervisors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s1|13352]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Insider threats are users with legitimate access to company assets who use that access, whether maliciously or unintentionally, to cause harm. Insider threats account for 60 percent of recent data breaches. There are major gaps in current insider threat defence, with a lack of techniques and solutions to identify insider attack activities in real time. In this project, we will develop novel Graph Neural Network (GNN) models that learn from network traffic data and user activities to classify network users into security classes. We will start by using GNNs to learn a baseline of normal behaviour for each user or machine. Deviations from normal activities can then be flagged as abnormal. Using GNNs, deviations will be tracked not only against a specific user's own history but also compared against other users in the same location, or with the same job title or job function.&lt;br /&gt;
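The baseline-and-deviation idea above can be sketched in miniature (this is an illustrative toy, not the project's GNN model; the logon counts and the 3-sigma threshold are hypothetical):&lt;br /&gt;

```python
# Toy sketch of baseline learning and deviation flagging: a user's daily
# activity count is anomalous only if it deviates from BOTH the user's own
# baseline and the baseline of their peer group (same role / location).
from statistics import mean, stdev

def zscore(value, history):
    """Standard score of value against a history of observations."""
    if len(history) > 1 and stdev(history) > 0:
        return abs(value - mean(history)) / stdev(history)
    return 0.0

def is_anomalous(value, user_history, peer_history, threshold=3.0):
    """Flag only when the value is far from both baselines."""
    return (zscore(value, user_history) > threshold
            and zscore(value, peer_history) > threshold)

user_logons = [10, 12, 11, 9, 10, 11, 12, 10]   # hypothetical daily logon counts
peer_logons = [11, 10, 12, 9, 11, 10, 13, 11]
print(is_anomalous(60, user_logons, peer_logons))  # large spike -> True
print(is_anomalous(11, user_logons, peer_logons))  # ordinary day -> False
```

Requiring deviation from the peer baseline as well as the personal one is what suppresses false alarms for users whose job legitimately involves unusual activity; the GNN approach generalises this by learning those peer relationships from the graph structure itself.&lt;br /&gt;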
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Anh Tuan Phu&lt;br /&gt;
* Quang Huy Ngo&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* Dr. Hong Gunn Chew&lt;br /&gt;
* Kyle Millar&lt;br /&gt;
* Prof. Hung Nguyen (TRC)&lt;br /&gt;
&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Set of objectives&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ..&lt;/div&gt;</summary>
		<author><name>A1798511</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15967</id>
		<title>Projects:2021s1-13352 Graph Neural Networks for Detecting Insider Threats</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15967"/>
		<updated>2021-04-06T06:03:32Z</updated>

		<summary type="html">&lt;p&gt;A1798511: /* Project students */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s1|13352]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Insider threats are users with legitimate access to company assets who use that access, whether maliciously or unintentionally, to cause harm. Insider threats account for 60 percent of recent data breaches. There are major gaps in current insider threat defence, with a lack of techniques and solutions to identify insider attack activities in real time. In this project, we will develop novel Graph Neural Network (GNN) models that learn from network traffic data and user activities to classify network users into security classes. We will start by using GNNs to learn a baseline of normal behaviour for each user or machine. Deviations from normal activities can then be flagged as abnormal. Using GNNs, deviations will be tracked not only against a specific user's own history but also compared against other users in the same location, or with the same job title or job function.&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* Anh Tuan Phu&lt;br /&gt;
* Quang Huy Ngo&lt;br /&gt;
&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* &amp;lt;Dr. Hong Gunn Chew&amp;gt;&lt;br /&gt;
* &amp;lt;Kyle Millar&amp;gt;&lt;br /&gt;
* &amp;lt;Prof. Hung Nguyen&amp;gt; (&amp;lt;TRC&amp;gt;)&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Set of objectives&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ..&lt;/div&gt;</summary>
		<author><name>A1798511</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15966</id>
		<title>Projects:2021s1-13352 Graph Neural Networks for Detecting Insider Threats</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2021s1-13352_Graph_Neural_Networks_for_Detecting_Insider_Threats&amp;diff=15966"/>
		<updated>2021-04-06T06:03:14Z</updated>

		<summary type="html">&lt;p&gt;A1798511: Created page with &amp;quot;Category:Projects Category:Final Year Projects 13352 Abstract here == Introduction == Insider threats are users with legitimate access to company a...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Projects]]&lt;br /&gt;
[[Category:Final Year Projects]]&lt;br /&gt;
[[Category:2021s1|13352]]&lt;br /&gt;
Abstract here&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Insider threats are users with legitimate access to company assets who use that access, whether maliciously or unintentionally, to cause harm. Insider threats account for 60 percent of recent data breaches. There are major gaps in current insider threat defence, with a lack of techniques and solutions to identify insider attack activities in real time. In this project, we will develop novel Graph Neural Network (GNN) models that learn from network traffic data and user activities to classify network users into security classes. We will start by using GNNs to learn a baseline of normal behaviour for each user or machine. Deviations from normal activities can then be flagged as abnormal. Using GNNs, deviations will be tracked not only against a specific user's own history but also compared against other users in the same location, or with the same job title or job function.&lt;br /&gt;
=== Project team ===&lt;br /&gt;
==== Project students ====&lt;br /&gt;
* &amp;lt;Anh Tuan Phu&amp;gt;&lt;br /&gt;
* &amp;lt;Quang Huy Ngo&amp;gt;&lt;br /&gt;
* &amp;lt;Student 3&amp;#039;s name&amp;gt;&lt;br /&gt;
==== Supervisors ====&lt;br /&gt;
* &amp;lt;Dr. Hong Gunn Chew&amp;gt;&lt;br /&gt;
* &amp;lt;Kyle Millar&amp;gt;&lt;br /&gt;
* &amp;lt;Prof. Hung Nguyen&amp;gt; (&amp;lt;TRC&amp;gt;)&lt;br /&gt;
==== Advisors ====&lt;br /&gt;
*&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
=== Objectives ===&lt;br /&gt;
Set of objectives&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
=== Topic 1 ===&lt;br /&gt;
&lt;br /&gt;
== Method ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusion ==&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] a, b, c, &amp;quot;Simple page&amp;quot;, In Proceedings of the Conference of Simpleness, 2010.&lt;br /&gt;
&lt;br /&gt;
[2] ..&lt;/div&gt;</summary>
		<author><name>A1798511</name></author>
		
	</entry>
</feed>