<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1702353</id>
	<title>Projects - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://projectswiki.eleceng.adelaide.edu.au/projects/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=A1702353"/>
	<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php/Special:Contributions/A1702353"/>
	<updated>2026-04-26T08:41:38Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.31.4</generator>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=14879</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=14879"/>
		<updated>2020-06-08T16:27:47Z</updated>

		<summary type="html">&lt;p&gt;A1702353: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a class of adversarial machine learning techniques that aim to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when poisoning attacks, simulated by statistical-based and gradient-based methods, are imposed on the network intrusion detector. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CICIDS 2017) is chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, which means the attack mechanism can be transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism to the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant piece of the data security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Wide application of machine learning techniques in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services heavily rely upon machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact on test accuracy of the network intrusion detector based on Random Forest and SVM against poisoning attacks simulated by statistical and gradient-based methods &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Network Intrusion Detection System&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Collecting &amp;amp; Synthesis of Existing Data &amp;amp; Metadata&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Dataset Analysis&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase I: Model Generation &amp;amp; Simple Attack Simulation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
In Phase I we will perform a literature review to solidify our understanding of adversarial machine learning and to investigate what has been done in this field. We will then evaluate and analyse commonly used datasets for network intrusion detection, including benchmark sets such as KDD&amp;#039;99 and NSL-KDD, and other datasets such as UNSW-NB15 and CIC IDS 2017. After the choice of dataset is justified, we will preprocess the chosen dataset and evaluate it using Python&amp;#039;s scikit-learn library and WEKA, both of which are commonly used machine learning tools. &amp;lt;br /&amp;gt;&lt;br /&gt;
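The dataset-evaluation step described above can be sketched with scikit-learn; since the real CICIDS 2017 files are not reproduced here, the feature matrix and labels below are synthetic placeholders standing in for flow features and benign/attack labels:

```python
# Synthetic stand-in data: the real Phase I evaluation would load the
# CICIDS 2017 flow features and labels instead.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # stand-in for flow features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for benign/attack labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"clean test accuracy: {acc:.2f}")
```

The clean-data accuracy measured here is the baseline against which the poisoned models of later phases would be compared.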
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Attack: &amp;lt;br /&amp;gt;&lt;br /&gt;
1. Random Label Flipping &amp;lt;br /&amp;gt;&lt;br /&gt;
Select random samples and flip their labels. &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Statistical Based Poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
Manipulate feature values. &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Optimisation Based Poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
Initialise an attack point and move it in the direction of steepest ascent of the outer objective function. &amp;lt;br /&amp;gt;&lt;br /&gt;
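As a concrete illustration of the first attack above, randomly flipping a fraction of binary training labels can be sketched as follows (the flip rate, seed, and toy label vector are illustrative assumptions, not values used in the project):

```python
import numpy as np

def flip_labels(y, rate=0.1, seed=0):
    """Randomly flip a fraction `rate` of binary (0/1) labels."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    n_flip = int(rate * len(y))                   # number of labels to poison
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]                           # invert the selected labels
    return y

y_train = np.array([0, 1, 0, 1, 1, 0, 1, 0, 1, 0])
y_poisoned = flip_labels(y_train, rate=0.2)
print((y_poisoned != y_train).sum())              # prints 2: exactly 20% flipped
```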
&lt;br /&gt;
Defence: &amp;lt;br /&amp;gt;&lt;br /&gt;
1. KNN Relabelling &amp;lt;br /&amp;gt;&lt;br /&gt;
Relabel each point in the training set based on the labels of its k nearest neighbours. &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Label Propagation &amp;lt;br /&amp;gt;&lt;br /&gt;
Semi-supervised method to propagate labels to an unlabelled set using a small set of verified data.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. Hybrid Method &amp;lt;br /&amp;gt;&lt;br /&gt;
Combining the methods in 1. and 2. &amp;lt;br /&amp;gt;&lt;br /&gt;
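A rough sketch of the KNN relabelling defence above, using scikit-learn's NearestNeighbors; the choice of k, the majority-vote tie-breaking, and the two-cluster toy data are assumptions for illustration only:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_relabel(X, y, k=3):
    """Relabel every training point by majority vote over the labels of
    its k nearest neighbours (excluding the point itself)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)             # idx[:, 0] is each point itself
    neigh_labels = y[idx[:, 1:]]          # labels of the k true neighbours
    # majority vote per row; ties resolve to the smaller label
    return np.array([np.bincount(row).argmax() for row in neigh_labels])

# toy data: two clusters, with the label at index 1 flipped by an attacker
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
y = np.array([0, 1, 0, 1, 1, 1])
print(knn_relabel(X, y, k=2))             # prints [0 0 0 1 1 1]
```

The flipped label at index 1 is outvoted by its clean neighbours and restored, which is the intuition behind this sanitisation step.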
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase III: Model Security Evaluation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” Pattern Recognition, vol. 84, pp. 317–331, 2018. &amp;lt;br /&amp;gt;&lt;br /&gt;
[2] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, “Manipulating machine learning: Poisoning attacks and countermeasures for regression learning,” in 2018 IEEE Symposium on Security and Privacy (SP), IEEE, 2018, pp. 19–35. &amp;lt;br /&amp;gt;&lt;br /&gt;
[3] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in ICML’12 Proceedings of the 29th International Conference on International Conference on Machine Learning, USA: Omnipress, 2012, pp. 1467–1474. &amp;lt;br /&amp;gt;&lt;br /&gt;
[4] M. Ghifary, W. Kleijn, and M. Zhang, “Deep hybrid network with good out-of-sample object recognition,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, May 2014. &amp;lt;br /&amp;gt;&lt;br /&gt;
[5] A. Paudice, L. Munoz-Gonzalez, and E. Lupu, “Label sanitization against label flipping poisoning attacks,” Springer Verlag, 2019, pp. 5–15. &amp;lt;br /&amp;gt;&lt;br /&gt;
[6] R. Taheri, R. Javidan, M. Shojafar, Z. Pooranian, A. Miri, and M. Conti, “On defending against label flipping attacks on malware detection systems,” Mar. 2020. arXiv:1908.04473v2 [cs.LG]. &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=14858</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=14858"/>
		<updated>2020-06-08T15:57:35Z</updated>

		<summary type="html">&lt;p&gt;A1702353: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a class of adversarial machine learning techniques that aim to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when poisoning attacks, simulated by statistical-based and gradient-based methods, are imposed on the network intrusion detector. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CICIDS 2017) is chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, which means the attack mechanism can be transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism to the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant piece of the data security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Wide application of machine learning techniques in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services heavily rely upon machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact on test accuracy of the network intrusion detector based on Random Forest and SVM against poisoning attacks simulated by statistical and gradient-based methods &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Network Intrusion Detection System&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Collecting &amp;amp; Synthesis of Existing Data &amp;amp; Metadata&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Dataset Analysis&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase I: Model Generation &amp;amp; Simple Attack Simulation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
In Phase I we will perform a literature review to solidify our understanding of adversarial machine learning and to investigate what has been done in this field. We will then evaluate and analyse commonly used datasets for network intrusion detection, including benchmark sets such as KDD&amp;#039;99 and NSL-KDD, and other datasets such as UNSW-NB15 and CIC IDS 2017. After the choice of dataset is justified, we will preprocess the chosen dataset and evaluate it using Python&amp;#039;s scikit-learn library and WEKA, both of which are commonly used machine learning tools. Then we will build our model from scratch, without libraries. &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
Attack: &amp;lt;br /&amp;gt;&lt;br /&gt;
1. Random Label Flipping &amp;lt;br /&amp;gt;&lt;br /&gt;
Select random samples and flip their labels. &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Statistical Based Poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
Manipulate feature values. &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Optimisation Based Poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
Initialise an attack point and move it in the direction of steepest ascent of the outer objective function. &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defence: &amp;lt;br /&amp;gt;&lt;br /&gt;
1. KNN Relabelling &amp;lt;br /&amp;gt;&lt;br /&gt;
Relabel each point in the training set based on the labels of its k nearest neighbours. &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Label Propagation &amp;lt;br /&amp;gt;&lt;br /&gt;
Semi-supervised method to propagate labels to an unlabelled set using a small set of verified data.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. Hybrid Method &amp;lt;br /&amp;gt;&lt;br /&gt;
Combining the methods in 1. and 2. &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase III: Model Security Evaluation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] B. Biggio and F. Roli, “Wild patterns: Ten years after the rise of adversarial machine learning,” Pattern Recognition, vol. 84, pp. 317–331, 2018.&lt;br /&gt;
[2] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, “Manipulating machine learning: Poisoning attacks and countermeasures for regression learning,” in 2018 IEEE Symposium on Security and Privacy (SP), IEEE, 2018, pp. 19–35.&lt;br /&gt;
[3] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in ICML’12 Proceedings of the 29th International Conference on International Conference on Machine Learning, USA: Omnipress, 2012, pp. 1467–1474.&lt;br /&gt;
[4] M. Ghifary, W. Kleijn, and M. Zhang, “Deep hybrid network with good out-of-sample object recognition,” in ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, May 2014.&lt;br /&gt;
[5] A. Paudice, L. Munoz-Gonzalez, and E. Lupu, “Label sanitization against label flipping poisoning attacks,” Springer Verlag, 2019, pp. 5–15.&lt;br /&gt;
[6] R. Taheri, R. Javidan, M. Shojafar, Z. Pooranian, A. Miri, and M. Conti, “On defending against label flipping attacks on malware detection systems,” Mar. 2020. arXiv:1908.04473v2 [cs.LG].&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12907</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12907"/>
		<updated>2019-09-22T12:38:32Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Related Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a class of adversarial machine learning techniques that aim to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when poisoning attacks, simulated by statistical-based and gradient-based methods, are imposed on the network intrusion detector. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CICIDS 2017) is chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, which means the attack mechanism can be transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism to the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant piece of the data security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Wide application of machine learning techniques in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services heavily rely upon machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact on test accuracy of the network intrusion detector based on Random Forest and SVM against poisoning attacks simulated by statistical and gradient-based methods &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Network Intrusion Detection System&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Collecting &amp;amp; Synthesis of Existing Data &amp;amp; Metadata&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Dataset Analysis&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase I: Model Generation &amp;amp; Simple Attack Simulation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
In Phase I we will perform a literature review to solidify our understanding of adversarial machine learning and to investigate what has been done in this field. We will then evaluate and analyse commonly used datasets for network intrusion detection, including benchmark sets such as KDD&amp;#039;99 and NSL-KDD, and other datasets such as UNSW-NB15 and CIC IDS 2017. After the choice of dataset is justified, we will preprocess the chosen dataset and evaluate it using Python&amp;#039;s scikit-learn library and WEKA, both of which are commonly used machine learning tools. Then we will build our model from scratch, without libraries. &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Phase III: Model Security Evaluation Using Simulated Network Topology&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12906</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12906"/>
		<updated>2019-09-22T12:37:58Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Related Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a class of adversarial machine learning techniques that aim to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem to solve in this project is to reduce the DoS caused by this kind of misclassification when poisoning attacks, simulated by statistical-based and gradient-based methods, are imposed on the network intrusion detector. The machine learning algorithms to investigate are those with the highest accuracies in current research, e.g., Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CICIDS 2017) is chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, which means the attack mechanism can be transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defense mechanism to the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant piece of the data security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Wide application of machine learning techniques in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services heavily rely upon machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact on test accuracy of the network intrusion detector based on Random Forest and SVM against poisoning attacks simulated by statistical and gradient-based methods &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Network Intrusion Detection System&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Collecting &amp;amp; Synthesis of Existing Data &amp;amp; Metadata&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Dataset Analysis&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &amp;lt;br /&amp;gt;&lt;br /&gt;
In Phase I we will perform a literature review to solidify our understanding of adversarial machine learning and to investigate what has been done in this field. We will then evaluate and analyse commonly used datasets for network intrusion detection, including benchmark sets such as KDD&amp;#039;99 and NSL-KDD, and other datasets such as UNSW-NB15 and CIC IDS 2017. After the choice of dataset is justified, we will preprocess the chosen dataset and evaluate it using Python&amp;#039;s scikit-learn library and WEKA, both of which are commonly used machine learning tools. Then we will build our model from scratch, without libraries. &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12905</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12905"/>
		<updated>2019-09-22T12:35:18Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Research Framework */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
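The DoS effect on legitimate users can be quantified as the detector's false positive rate, i.e. the fraction of benign flows flagged as malicious. A minimal sketch with illustrative labels (0 = benign, 1 = attack), assuming scikit-learn:

```python
# Compute the false positive rate of a binary intrusion detector.
# The label vectors below are illustrative, not project data.
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0]

# For labels [0, 1] the matrix is [[tn, fp], [fn, tp]].
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)
print("false positive rate:", fpr)  # 2 of 6 benign flows blocked
```

A successful poisoning attack in this setting drives the false positive rate up, locking legitimate users out even though overall accuracy may look acceptable.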
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most prior work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem Formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact of poisoning attacks, simulated by statistical and gradient-based methods, on the test accuracy of a network intrusion detector based on Random Forest and SVM &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
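Random label flipping, the simplest of the listed poisoning mechanisms, can be sketched as follows; the flip rate and data below are illustrative only:

```python
# Minimal sketch of random label-flip poisoning: flip the labels of a
# chosen fraction of the training set before the detector is (re)trained.
import numpy as np

def flip_labels(y, rate, rng):
    """Return a copy of binary labels y with a fraction 'rate' flipped."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)       # illustrative binary labels
y_bad = flip_labels(y, rate=0.2, rng=rng)
print("fraction flipped:", np.mean(y != y_bad))  # 0.2
```

Retraining the detector on `y_bad` instead of `y` then gives a first, crude measurement of how sensitive each algorithm is to training-set contamination.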
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Network Intrusion Detection System&amp;#039;&amp;#039;&amp;#039; &lt;br /&gt;
&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. Collect &amp;amp; Synthesise Existing Data &amp;amp; Metadata &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Dataset Analysis&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12904</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12904"/>
		<updated>2019-09-22T12:33:38Z</updated>

		<summary type="html">&lt;p&gt;A1702353: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most prior work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Research Framework ==&lt;br /&gt;
1. Problem Formulation &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Problem to Solve&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
To reduce the impact of poisoning attacks, simulated by statistical and gradient-based methods, on the test accuracy of a network intrusion detector based on Random Forest and SVM &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;What is Given&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Network Traffic Datasets - KDD&amp;#039;99, UNSW-NB15, CIC IDS 2017, etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Mechanisms of Poisoning Attacks - Random Label Flips, Feature Manipulation, Jacobian Saliency Map Attack (JSMA), Fast Gradient Sign Method (FGSM), etc. &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Libraries of Machine Learning Algorithms - scikit-learn, MATLAB Statistics &amp;amp; Machine Learning Toolbox, the CleverHans library &amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Constraints&amp;#039;&amp;#039;&amp;#039; &amp;lt;br /&amp;gt;&lt;br /&gt;
i. Time &amp;lt;br /&amp;gt;&lt;br /&gt;
ii. Complexity &amp;lt;br /&amp;gt;&lt;br /&gt;
iii. Memory &amp;lt;br /&amp;gt;&lt;br /&gt;
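The gradient-based mechanisms listed above (e.g. FGSM) can be illustrated with a toy sketch. Everything here is an assumption for demonstration: a logistic-regression surrogate with random weights stands in for the detector, and the closed-form gradient replaces the automatic differentiation a framework would provide.

```python
# Hedged sketch of the Fast Gradient Sign Method (FGSM): perturb each
# sample by eps times the sign of the loss gradient w.r.t. the input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(x, y, w, b):
    p = sigmoid(x @ w + b)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

def fgsm(x, y, w, b, eps):
    # d(loss)/dx for logistic regression is (sigmoid(w.x + b) - y) * w
    grad = (sigmoid(x @ w + b) - y)[:, None] * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.0            # toy surrogate model
x = rng.normal(size=(10, 5))              # toy feature vectors
y = (sigmoid(x @ w + b) > 0.5).astype(float)  # model-consistent labels

x_adv = fgsm(x, y, w, b, eps=0.5)
print("mean loss before:", float(np.mean(logistic_loss(x, y, w, b))))
print("mean loss after: ", float(np.mean(logistic_loss(x_adv, y, w, b))))
```

The perturbed samples score a strictly higher loss under the surrogate, which is the property gradient-based attacks exploit when crafting poisoned or evasive traffic records.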
&lt;br /&gt;
&lt;br /&gt;
2. Develop Conceptual Model &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Identify Relevant Approaches &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Collect &amp;amp; Synthesise Existing Data &amp;amp; Metadata&amp;lt;br /&amp;gt;&lt;br /&gt;
5. Generate Specific Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
6. Test Hypotheses &amp;lt;br /&amp;gt;&lt;br /&gt;
7. Research &amp;amp; Findings &amp;lt;br /&amp;gt;&lt;br /&gt;
8. Synthesis of Results &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12903</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12903"/>
		<updated>2019-09-22T12:20:00Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Related Work */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most prior work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &amp;lt;br /&amp;gt;&lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12901</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12901"/>
		<updated>2019-09-22T12:13:28Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Motivations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &amp;lt;br /&amp;gt;&lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &amp;lt;br /&amp;gt;&lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &amp;lt;br /&amp;gt;&lt;br /&gt;
4. Most prior work used outdated datasets &amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12900</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12900"/>
		<updated>2019-09-22T12:12:48Z</updated>

		<summary type="html">&lt;p&gt;A1702353: /* Motivations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &lt;br /&gt;
4. Most prior work used outdated datasets&lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
	<entry>
		<id>https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12899</id>
		<title>Projects:2019s2-23102 Secure Machine Learning Against DoS Induced by Poisoning Attacks</title>
		<link rel="alternate" type="text/html" href="https://projectswiki.eleceng.adelaide.edu.au/projects/index.php?title=Projects:2019s2-23102_Secure_Machine_Learning_Against_DoS_Induced_by_Poisoning_Attacks&amp;diff=12899"/>
		<updated>2019-09-22T12:10:41Z</updated>

		<summary type="html">&lt;p&gt;A1702353: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Poisoning attacks are a type of adversarial machine learning technique that aims to fool a target learning-based system by injecting &amp;quot;false&amp;quot; data into the system&amp;#039;s training set, in order to maximise the system&amp;#039;s misclassification rate. This project analyses how poisoning attacks can compromise the functionality of a network intrusion detection system (NIDS) and proposes countermeasures.&lt;br /&gt;
&lt;br /&gt;
In the target system, Denial of Service (DoS) in the Application Layer can occur if the detectors misclassify legitimate users as malicious ones. The problem addressed in this project is to reduce the DoS caused by this kind of misclassification when the network intrusion detector is subjected to poisoning attacks simulated by statistical and gradient-based methods. The machine learning algorithms investigated are those achieving the highest accuracies in current research, e.g. Random Forest, SVM and some classifier ensemble techniques.&lt;br /&gt;
&lt;br /&gt;
The Canadian Institute for Cybersecurity Intrusion Detection System dataset (CIC IDS 2017) has been chosen as the network traffic dataset to work on.&lt;br /&gt;
&lt;br /&gt;
==Motivations==&lt;br /&gt;
1. Network intrusion detection is a significant component of the information security framework &lt;br /&gt;
2. Machine learning techniques are widely applied in network intrusion detection &lt;br /&gt;
3. Online services rely heavily on machine learning, which exposes learning algorithms to the threat of data poisoning &lt;br /&gt;
4. Most prior work used outdated datasets &lt;br /&gt;
&lt;br /&gt;
== Project Team ==&lt;br /&gt;
=== Students ===&lt;br /&gt;
&amp;lt;li&amp;gt;Fengyi Yang&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Elaine Kuan&lt;br /&gt;
=== Supervisors ===&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Cheng-Chew Lim&amp;lt;br /&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Prof. Ali Babar&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
The three main objectives of this project are:&amp;lt;br /&amp;gt;&lt;br /&gt;
1. To develop learning-based detectors for more than one type of network intrusion.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. To simulate an intelligent and adaptive adversary to attack the learning-based system, meaning the attack mechanism is transferable to other &amp;quot;peer&amp;quot; datasets, not only the target one.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. To implement a robust proactive defence mechanism against the imposed poisoning attacks.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Related Work ==&lt;br /&gt;
Phase I: Model Generation &amp;amp; Simple Attack Simulation &lt;br /&gt;
Phase II: State-of-the-Art Attacks &amp;amp; Defence Methods Implementation &lt;br /&gt;
Phase III: Model Security Evaluation Using Simulated Network Topology &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Methodology ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;/div&gt;</summary>
		<author><name>A1702353</name></author>
		
	</entry>
</feed>