<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1696617</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1696617"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1696617"/>
	<updated>2026-04-24T02:44:27Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12872</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12872"/>
		<updated>2019-09-21T15:55:38Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Project Team */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set in order to maximise its misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CICIDS 2017) is chosen as the network traffic dataset for this project.&lt;br /&gt;
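&lt;br /&gt;
As a hedged illustration of the attack mechanism described above (not the project implementation), the following sketch assumes scikit-learn, a synthetic dataset standing in for CICIDS 2017, and simple label flipping as the poisoning method; it trains a Random Forest detector on increasingly poisoned training sets and reports the clean test accuracy.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;python&amp;quot;&amp;gt;&lt;br /&gt;
# Hypothetical sketch of a label-flipping poisoning attack on a learning-based detector.&lt;br /&gt;
import numpy as np&lt;br /&gt;
from sklearn.datasets import make_classification&lt;br /&gt;
from sklearn.ensemble import RandomForestClassifier&lt;br /&gt;
from sklearn.model_selection import train_test_split&lt;br /&gt;
&lt;br /&gt;
# Synthetic stand-in for CICIDS 2017 flow features (0 = benign, 1 = attack).&lt;br /&gt;
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)&lt;br /&gt;
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)&lt;br /&gt;
&lt;br /&gt;
def flip_labels(labels, rate, rng):&lt;br /&gt;
    # Poison the training set by flipping the labels of a random fraction of samples.&lt;br /&gt;
    poisoned = labels.copy()&lt;br /&gt;
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)&lt;br /&gt;
    poisoned[idx] = 1 - poisoned[idx]&lt;br /&gt;
    return poisoned&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
for rate in (0.0, 0.1, 0.2, 0.3):&lt;br /&gt;
    detector = RandomForestClassifier(n_estimators=100, random_state=0)&lt;br /&gt;
    detector.fit(X_train, flip_labels(y_train, rate, rng))&lt;br /&gt;
    # Clean test accuracy typically degrades as the poisoning rate grows.&lt;br /&gt;
    print(rate, detector.score(X_test, y_test))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;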
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
* Fengyi Yang&lt;br /&gt;
* Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
* Prof. Cheng-Chew Lim&lt;br /&gt;
* Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&lt;br /&gt;
# To develop learning-based detectors for more than one type of network intrusion.&lt;br /&gt;
# To simulate an intelligent and adaptive adversary against the learning-based system, such that the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&lt;br /&gt;
# To implement a robust proactive defense mechanism against the imposed poisoning attacks.&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12871</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12871"/>
		<updated>2019-09-21T15:51:05Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Introduction */  dataset added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
Canadian Institute for Cybersecurity Intrusion Detection System Dataset (CICIDS 2017) is chosen to be the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
Fengyi Yang and Elaine Kuan&amp;lt;br /&amp;gt;&lt;br /&gt;
Supervised by Prof. Cheng-Chew Lim and Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary against the learning-based system, such that the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12870</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12870"/>
		<updated>2019-09-21T15:49:37Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Objectives */  break line&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
Fengyi Yang and Elaine Kuan&amp;lt;br /&amp;gt;&lt;br /&gt;
Supervised by Prof. Cheng-Chew Lim and Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary against the learning-based system, such that the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12869</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12869"/>
		<updated>2019-09-21T15:49:10Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Objectives */  list&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
Fengyi Yang and Elaine Kuan&amp;lt;br /&amp;gt;&lt;br /&gt;
Supervised by Prof. Cheng-Chew Lim and Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
Three main objectives of this project are:&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary against the learning-based system, such that the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12868</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12868"/>
		<updated>2019-09-21T15:45:37Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Project Team */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
Fengyi Yang and Elaine Kuan&amp;lt;br /&amp;gt;&lt;br /&gt;
Supervised by Prof. Cheng-Chew Lim and Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12867</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12867"/>
		<updated>2019-09-21T15:43:47Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Project Team */  members added&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
Fengyi Yang and Elaine Kuan&lt;br /&gt;
Supervised by Prof. Cheng-Chew Lim and Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12866</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12866"/>
		<updated>2019-09-21T15:42:29Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Students */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12865</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12865"/>
		<updated>2019-09-21T15:41:27Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Project Team */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
== Students ==&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12864</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12864"/>
		<updated>2019-09-21T15:40:50Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Introduction */  detailed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated with statistical-based and gradient-based methods. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12863</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12863"/>
		<updated>2019-09-21T15:32:22Z</updated>

		<summary type="html">&lt;p&gt;A1696617: /* Introduction */  draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are one type of adversarial machine learning technique which aims to fool the target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS).&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) at the Application Layer can occur if the detectors misclassify legitimate users as malicious.&lt;br /&gt;
&lt;br /&gt;
The problem to solve is to reduce the impact of poisoning attacks, simulated with statistical-based and gradient-based methods, on the test accuracy of a network intrusion detector based on Random Forest and SVM.&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12862</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12862"/>
		<updated>2019-09-21T15:00:58Z</updated>

		<summary type="html">&lt;p&gt;A1696617: Subtitles created&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1696617</name></author>
		
	</entry>
</feed>