
Increasing Threat of Adversarial Machine Learning Attacks


Big Data-powered machine learning (ML) and deep learning have made impressive advances in many fields. Recent years have seen a rapid increase in the use of ML, through which computers can be programmed to identify patterns in data and make increasingly precise predictions over time. Machine learning tools enable organizations to promptly identify new opportunities as well as potential risks. In cybersecurity, machine learning techniques are having a lasting impact. However, while machine learning models offer many potential benefits, they are also vulnerable to manipulation. This risk is known as ‘adversarial machine learning (AML)’. Let us understand this in detail…

What is an adversarial machine learning attack?

The term ‘adversary’ is used in the field of computer security to describe people or machines that may attempt to penetrate or corrupt a computer network or program. An adversarial machine learning attack is a technique that attempts to dupe ML models by supplying deceptive input, most commonly to cause the model to malfunction.

Adversarial machine learning was studied as early as 2004, but the threat has grown significantly in recent years with the increasing use of AI and ML. Most artificial intelligence researchers would agree that one of the key concerns of machine learning is adversarial attacks that cause trained models to behave in undesired ways.

How do adversarial attacks work?

Adversarial attacks exploit the fact that machine learning models learn their behaviour from patterns in data, and confound them by manipulating their input. By adding tiny, inconspicuous patches of pixels to an image, a malicious actor can cause the machine learning algorithm to classify it as something it is not.

For instance, in 2018, a group of researchers showed that by adding stickers to a ‘stop’ sign, they could fool the computer vision system of a self-driving car into mistaking it for a ‘speed limit 45’ sign.

Researchers have also been able to deceive facial-recognition systems by sticking a printed pattern on glasses or hats. Further, there have been successful attempts to trick speech-recognition systems into hearing phantom phrases by inserting patterns of white noise into the audio.
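
To make this concrete, the sketch below is a minimal, hypothetical PyTorch example: it pastes a small sticker-like patch onto an image and compares a classifier’s predictions before and after. The stand-in model and the random patch are illustrative only; a real patch attack optimizes the patch pixels against the target model’s loss to force a specific misclassification.

```python
import torch
import torch.nn as nn

# Stand-in classifier with random weights, used only to make the sketch runnable;
# a real attack would target the victim's actual vision model.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

image = torch.rand(1, 3, 64, 64)      # placeholder image (e.g. a road sign)
patch = torch.rand(1, 3, 8, 8)        # small sticker-like patch

patched = image.clone()
patched[:, :, 28:36, 28:36] = patch   # paste the patch onto the image

with torch.no_grad():
    before = model(image).argmax(dim=1).item()
    after = model(patched).argmax(dim=1).item()
print("predicted class before:", before, "after:", after)
```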

What is an adversarial example?

An adversarial example is a specially crafted input that is designed to look “normal” to humans but causes a machine learning model to misclassify it.
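
A widely cited way to craft such an input is the Fast Gradient Sign Method (FGSM), which nudges every pixel slightly in the direction that most increases the model’s loss. The PyTorch sketch below is a minimal illustration, assuming the caller supplies a differentiable classifier `model`, an input batch `x` with pixel values in [0, 1], and the true labels `y`.

```python
import torch
import torch.nn as nn

def fgsm_example(model, x, y, eps=0.03):
    """Craft an adversarial example with a single FGSM step."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()                      # gradient of the loss w.r.t. the pixels
    x_adv = x + eps * x.grad.sign()      # tiny, roughly imperceptible nudge
    return x_adv.clamp(0, 1).detach()    # keep pixels in the valid range
```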

Types of adversarial attacks

Adversarial attacks are classified into two key categories – ‘black-box adversarial attacks’ and ‘white-box adversarial attacks’. White-box attacks describe scenarios in which the attacker has full access to the internals of the target model, such as its architecture and parameters, whereas black-box attacks describe scenarios in which the attacker does not have such access. A black-box attacker therefore uses a substitute model, or no model at all, to generate adversarial inputs in the hope that these will transfer to the target model.
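
As a rough illustration of the black-box (transfer) setting, the hypothetical sketch below crafts the adversarial input on a surrogate model the attacker controls, then submits it to the target, which is reachable only as a prediction API. The names `surrogate` and `target_predict` are placeholders, not part of any real system.

```python
import torch
import torch.nn as nn

def craft_on_surrogate(surrogate, x, y, eps=0.03):
    # White-box FGSM step on a substitute model the attacker trained themselves.
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(surrogate(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def black_box_attack(surrogate, target_predict, x, y):
    # The target exposes only predictions: no weights, no gradients.
    x_adv = craft_on_surrogate(surrogate, x, y)
    return target_predict(x_adv)   # hope the adversarial example "transfers"
```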

Defense against adversarial machine learning

Researchers have proposed a multi-step approach to build a defense against AML…

  • Threat modelling: Estimating the attacker’s goals and capabilities provides an opportunity to prevent attacks, for instance by creating different models of the same ML system that can withstand them.
  • Attack simulation: Simulating attacks according to the possible attack strategies of the attacker can reveal loopholes.
  • Attack impact evaluation: In this method, one must evaluate the total impact that an attacker can have over the system, thus ensuring preparation in the event of such an attack.
  • Information laundering: By modifying the information extracted by the attacker, this type of defence can render the attack pointless.
  • Adversarial training: In this approach, the engineers of the machine learning algorithm retrain their models on adversarial examples to make them robust against perturbations in the data (see the sketch after this list).
  • Defensive distillation: It adds flexibility to an algorithm’s classification process to make it less susceptible to exploitation.
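
As a minimal illustration of adversarial training, the sketch below performs a single training step on FGSM-perturbed inputs instead of the clean batch. It assumes an ordinary PyTorch training loop supplies the `model`, `optimizer`, and data batch `(x, y)`; production schemes typically mix clean and adversarial batches and use stronger attacks than one-step FGSM.

```python
import torch
import torch.nn as nn

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    # 1. Craft FGSM-perturbed inputs from the current model state.
    x_pert = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + eps * x_pert.grad.sign()).clamp(0, 1).detach()

    # 2. Standard supervised update, but on the perturbed batch.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```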

Wrapping up

With machine learning rapidly becoming core to an organization’s value proposition, organizations need to protect themselves against the associated risks. The study of adversarial machine learning will therefore remain essential to protecting ML systems.

Written by Neelesh Kripalani, Chief Technology Officer at Clover Infotech, and published in CXO Today.
