Chapter 8 – Evading Intrusion Detection Systems with Adversarial Machine Learning
- Can you briefly explain why overtraining a machine learning model is not a good idea?
Overtraining occurs when a model learns the training data too well, including its noise and idiosyncrasies, which degrades the model's performance on new, unseen data. This is also referred to as overfitting.
- What is the difference between overfitting and underfitting?
Overfitting refers to overtraining the model so that it memorizes the training data and fails to generalize, while underfitting refers to a model that is too simple to capture the patterns in the data, so it performs poorly on both the training data and new data.
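The contrast can be sketched with polynomial regression on noisy quadratic data (illustrative values, not from the chapter): a degree-1 fit underfits, while a high-degree fit chases the noise and overfits.

```python
# Sketch: underfitting vs. overfitting with polynomial regression.
# The data, degrees, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# True relationship: y = x^2, plus noise.
x_train = np.linspace(-3, 3, 12)
y_train = x_train**2 + rng.normal(0, 1.0, size=x_train.shape)
x_test = np.linspace(-2.8, 2.8, 12)
y_test = x_test**2 + rng.normal(0, 1.0, size=x_test.shape)

def mse(coeffs, x, y):
    # Mean squared error of a fitted polynomial on (x, y).
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

under = np.polyfit(x_train, y_train, 1)  # too simple: underfits
over = np.polyfit(x_train, y_train, 9)   # fits the noise: overfits

train_under = mse(under, x_train, y_train)
train_over = mse(over, x_train, y_train)
test_over = mse(over, x_test, y_test)
```

The overfit model drives its training error far below the underfit model's, but its error on held-out data is much larger than its training error, which is the signature of overfitting.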
- What is the difference between an evasion and poisoning attack?
In an evasion attack, the attacker submits many different samples at inference time to learn the model's decision pattern and bypass it; in a poisoning attack, the attacker corrupts the model's training data so it learns the wrong behavior during the training phase.
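The poisoning side of this distinction can be sketched with a toy label-poisoning attack against a nearest-mean classifier (the classifier, data values, and poison placement are illustrative assumptions, not the book's example):

```python
# Sketch: label poisoning against a 1-D nearest-mean classifier.
# All data values here are illustrative.
import numpy as np

def fit(xs, ys):
    # Learn one mean per class label.
    return {c: float(np.mean([x for x, y in zip(xs, ys) if y == c]))
            for c in (0, 1)}

def predict(means, x):
    # Assign the class whose learned mean is closest.
    return min(means, key=lambda c: abs(x - means[c]))

def accuracy(means, xs, ys):
    return float(np.mean([predict(means, x) == y for x, y in zip(xs, ys)]))

# Clean training data: class 0 near 0, class 1 near 4.
train_x = [0.0] * 10 + [4.0] * 10
train_y = [0] * 10 + [1] * 10
test_x, test_y = [0.0, 0.2, 3.8, 4.0], [0, 0, 1, 1]

clean = fit(train_x, train_y)
acc_clean = accuracy(clean, test_x, test_y)

# Poisoning: inject points labelled class 0 but placed far to the right,
# dragging the learned class-0 mean past the class-1 mean.
poison_x = train_x + [12.0] * 10
poison_y = train_y + [0] * 10
poisoned = fit(poison_x, poison_y)
acc_poisoned = accuracy(poisoned, test_x, test_y)
```

On the clean data the classifier is perfect; after poisoning, the corrupted class-0 mean lands at 6, so every class-0 test point is misclassified.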
- How does adversarial clustering work?
Adversarial clustering occurs when an attacker manipulates the input data, adding a small percentage of attack samples, so that the newly added samples hide within the existing clusters.
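A minimal sketch of the idea, assuming a centroid-distance anomaly detector (the detector, data, and offsets below are illustrative assumptions): a sample placed near the benign cluster's centroid slips under the distance threshold, while a blatant outlier is flagged.

```python
# Sketch: hiding an attack sample inside an existing benign cluster
# so a centroid-distance anomaly detector misses it. Illustrative data.
import numpy as np

rng = np.random.default_rng(1)

# Benign traffic forms one tight cluster around (0, 0).
benign = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
centroid = benign.mean(axis=0)

# Flag anything farther from the centroid than the farthest benign point.
threshold = np.linalg.norm(benign - centroid, axis=1).max()

def is_flagged(point):
    return bool(np.linalg.norm(point - centroid) > threshold)

obvious_attack = np.array([5.0, 5.0])             # far from the cluster
crafted_attack = centroid + np.array([0.1, 0.1])  # blends into the cluster
```

The crafted sample sits well inside the benign radius, so the detector treats it as normal traffic.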
- What type of adversarial attack is used to avoid the intrusion detection system?
The attack used in the demonstration is the Jacobian-based Saliency Map Attack (JSMA).
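The core idea of JSMA (compute the Jacobian of the model's output with respect to the input, use it as a saliency map, and perturb only the most influential feature) can be sketched against a logistic-regression "detector". The weights, input, and step size below are illustrative assumptions, not the chapter's code:

```python
# Sketch: a JSMA-style attack on a logistic-regression detector.
# Weights, input, and step size are illustrative assumptions.
import numpy as np

w = np.array([3.0, -1.0, 0.5])   # hypothetical trained weights
b = 0.0

def predict(x):
    # 1 = "malicious" when the logit is positive.
    return int(x @ w + b > 0)

def jsma_step(x, target=1, theta=0.5):
    # For logistic regression the Jacobian of the logit w.r.t. the
    # input is just w, so the saliency map reduces to picking the
    # feature that pushes the logit hardest toward the target class.
    direction = 1.0 if target == 1 else -1.0
    j = int(np.argmax(direction * w))  # most salient feature
    x = x.copy()
    x[j] += direction * theta          # perturb only that feature
    return x

x = np.array([-1.0, 0.0, 0.0])  # starts classified as class 0
adv = x.copy()
for _ in range(10):              # small, bounded perturbation budget
    if predict(adv) == 1:
        break
    adv = jsma_step(adv)
```

After a few steps on the single most salient feature, the crafted sample crosses the decision boundary while the other features stay untouched, which is the minimal-perturbation behavior JSMA aims for.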
- Is the preceding attack an evasion or poisoning attack?
It is an evasion attack: the adversarial samples are crafted at inference time to fool the already-trained model, without modifying its training data.