
Machine Learning Automations Can Be Exploited

Machine learning has revolutionized the world of technology, allowing for more efficient and effective decision-making processes. While machine learning models have been put to beneficial use classifying and analyzing data, identifying patterns, and making predictions, they are not immune to vulnerabilities. In this blog post, we are going to explore ways in which machine learning automations can be deceived and exploited by cybercriminals.

What Is Adversarial Machine Learning?

Machine learning models are designed to learn from data and make predictions based on it, but if that data is manipulated or corrupted, the model can be driven to make incorrect predictions. Deliberately crafting such manipulations is known as adversarial machine learning.

Reports by TechTalk note that adversarial machine learning has become an active area of research, given the growing concern that these vulnerabilities will be exploited as AI continues to evolve. “Adversarial examples exploit the way artificial intelligence algorithms work to disrupt their behavior,” commented Ben Dickson, founder of TechTalk.

Adversarial attacks can be applied to machine learning models in a variety of ways, including data poisoning, model evasion, model extraction, data tampering, and adversarial examples.

  • Data Poisoning.

Data poisoning involves feeding a machine learning model with malicious data during the training phase. By doing this, an attacker can manipulate the model to make incorrect predictions when presented with new data.

For example, the 2019 interim report of the U.S. National Security Commission on Artificial Intelligence noted that “by placing a few small stickers on the ground, researchers […] could cause a self-driving car to move into the opposite lane of traffic.”
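
To make the mechanics concrete, here is a minimal sketch of a label-flipping poisoning attack, assuming a scikit-learn setup; the synthetic dataset and the fraction of flipped labels are illustrative choices, not taken from any of the incidents described here.

```python
# Minimal label-flipping poisoning sketch (illustrative only).
# Assumes scikit-learn and NumPy; dataset and flip rate are arbitrary choices.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker relabels 40% of class-1 training examples as class 0,
# biasing the trained model toward the attacker's preferred output.
rng = np.random.default_rng(0)
class1_idx = np.where(y_train == 1)[0]
poison_idx = rng.choice(class1_idx, size=int(0.4 * len(class1_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 0

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model now systematically under-detects class 1 on clean test data.
class1_test = X_test[y_test == 1]
print("clean model, class-1 detection rate:   ", (clean_model.predict(class1_test) == 1).mean())
print("poisoned model, class-1 detection rate:", (poisoned_model.predict(class1_test) == 1).mean())
```

In a real pipeline, the same effect would come from polluted training data scraped from untrusted sources rather than an explicit relabeling step.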

  • Model Evasion.

Model evasion involves manipulating input data at runtime so the model makes incorrect predictions. These attacks are particularly dangerous because the attacker does not need to have access to the training data or the model itself, and they are often used to bypass security systems that use machine learning models to detect threats.

For example, research by Google demonstrated that “adding particular noise to an image could alter the model’s forecast for image recognition.”
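
The following is a rough sketch of how little access such an attack needs: it perturbs one input with small random noise until the model’s prediction flips, using only prediction queries and never touching the training data or model internals. The model, noise scale, and query budget are illustrative assumptions.

```python
# Black-box evasion sketch: nudge an input at inference time until the
# model's prediction flips, using only prediction queries (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)   # stands in for the target system

x = X[0].copy()
original_label = model.predict([x])[0]

rng = np.random.default_rng(1)
adversarial = x.copy()
for _ in range(10_000):
    candidate = adversarial + rng.normal(scale=0.05, size=x.shape)  # small random tweak
    if model.predict([candidate])[0] != original_label:
        adversarial = candidate
        break
    # Keep the tweak only if it nudges the model away from the original label.
    if (model.predict_proba([candidate])[0][original_label]
            < model.predict_proba([adversarial])[0][original_label]):
        adversarial = candidate

print("original prediction:  ", original_label)
print("perturbed prediction: ", model.predict([adversarial])[0])
print("size of perturbation: ", np.linalg.norm(adversarial - x))
```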

  • Model Extraction.

Model extraction involves extracting information from a machine learning model, such as its parameters or training data. This information can be used to create a replica of the model or to understand how it makes predictions.

A study led by Professor Dawn Song at UC Berkeley showed that “they could extract social security numbers from a language processing model that had been trained with a large volume of emails, some of which contained sensitive personal information.”
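
A minimal sketch of the idea, assuming the attacker can only call a prediction API: query the victim model with synthetic inputs, record its answers, and train a local replica on those input/output pairs. The victim model, query budget, and surrogate choice are all illustrative.

```python
# Model-extraction sketch: build a surrogate purely from the victim's
# predictions on attacker-chosen queries (illustrative setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=2)
victim = RandomForestClassifier(random_state=2).fit(X, y)   # the black-box "API"

# The attacker queries the victim with synthetic inputs and records its answers.
rng = np.random.default_rng(2)
queries = rng.normal(size=(5000, 10))
stolen_labels = victim.predict(queries)

# A local replica is trained on the stolen input/output pairs.
surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)

# Agreement with the victim on fresh inputs approximates how much behavior was copied.
fresh = rng.normal(size=(1000, 10))
agreement = (surrogate.predict(fresh) == victim.predict(fresh)).mean()
print("surrogate/victim agreement:", agreement)
```

Extracting memorized training data, as in the Berkeley study, requires different probing tricks, but it relies on the same kind of query-only access.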

  • Data Tampering.

Machine learning models can also be deceived and exploited through data tampering: altering the input data in a way that is undetectable to the human eye but completely fools the model.

Researchers at Samsung and several universities in the U.S. found that “by making small tweaks to stop signs, they could make them invisible to the computer vision algorithms of self-driving cars”, which could easily cause an accident in the event of an attack.
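
The sketch below shows the principle on a simple linear classifier rather than a vision model: the input is nudged just far enough along the model’s weight vector to cross the decision boundary, so each individual feature changes only slightly. The numbers are illustrative and are not a reconstruction of the stop-sign experiments.

```python
# Data-tampering sketch: a minimal-norm nudge that crosses the decision
# boundary of a linear model while changing each feature only slightly.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=50, random_state=3)
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w, b = model.coef_[0], model.intercept_[0]
score = x @ w + b                      # positive -> class 1, negative -> class 0

# Smallest shift along w that pushes the score just past zero.
delta = -(score + np.sign(score) * 1e-3) * w / (w @ w)
x_tampered = x + delta

print("original prediction:   ", model.predict([x])[0])
print("tampered prediction:   ", model.predict([x_tampered])[0])
print("largest feature change:", np.abs(delta).max())
```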

  • Adversarial Examples.

Adversarial examples are inputs that are specifically designed to cause a machine learning model to make incorrect predictions. Making small modifications to the input may have a large impact on the model’s output.

For instance, researchers at UC Berkeley managed to “manipulate the behavior of automated speech recognition systems” using adversarial examples, a result that could extend to smart assistants such as Amazon Alexa, Apple Siri, or Microsoft Cortana.
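
One well-known recipe for crafting such inputs is the fast gradient sign method (FGSM), which moves every input feature a small step in the direction that increases the model’s loss. Below is a sketch against a simple logistic-regression classifier; the epsilon budget and synthetic data are illustrative assumptions, and audio or image attacks like the ones above apply the same principle at much larger scale.

```python
# FGSM-style adversarial example sketch against a logistic-regression
# classifier (illustrative; real attacks typically target deep networks).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=100, random_state=4)
model = LogisticRegression(max_iter=1000).fit(X, y)

x, label = X[0], y[0]
w, b = model.coef_[0], model.intercept_[0]

# Gradient of the cross-entropy loss with respect to the input features.
p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
grad = (p - label) * w

# One FGSM step: each feature moves by at most epsilon in the loss-increasing direction.
epsilon = 0.5   # deliberately generous budget so the effect is visible
x_adv = x + epsilon * np.sign(grad)

print("P(true label) before:", model.predict_proba([x])[0][label])
print("P(true label) after: ", model.predict_proba([x_adv])[0][label])
print("prediction before / after:", model.predict([x])[0], "/", model.predict([x_adv])[0])
```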

Is There Protection Against Adversarial Attacks?

Protection against adversarial machine learning attacks starts with robust and resilient models. One approach is to use multiple models, or models trained on different data sets, so that a single manipulated input is less likely to fool them all, as sketched below. It is also important to test models regularly to ensure they are not vulnerable to these attacks.
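
As a minimal sketch of the “multiple models, different data” idea, the snippet below trains three different classifiers on disjoint slices of the training data and lets them vote, so a single fooled member cannot decide the outcome on its own. The model choices and the three-way split are illustrative.

```python
# Ensemble sketch: members trained on different data vote at prediction time.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, random_state=5)

# Each member sees a different third of the training data.
splits = np.array_split(np.arange(len(X)), 3)
members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(random_state=5)),
    ("svm", SVC(random_state=5)),
]
for (name, member), idx in zip(members, splits):
    member.fit(X[idx], y[idx])

def majority_vote(inputs):
    """An input must fool at least two members to change the final answer."""
    votes = np.stack([member.predict(inputs) for _, member in members])
    return (votes.mean(axis=0) >= 0.5).astype(int)

print("ensemble predictions:", majority_vote(X[:5]), "true labels:", y[:5])
```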

Human Factor

Considering the human factor is also key. In many cases, attackers use social engineering tactics to gain access to sensitive information or systems, either by convincing an employee to provide login credentials or by getting them to download malicious files. That is why organizations must not underestimate the importance of training employees to recognize and respond appropriately to social engineering attacks.

Defensive Techniques

Another approach is to use defensive techniques such as anomaly detection and intrusion detection systems. Anomaly detection involves monitoring system behavior and flagging unusual or unexpected activity, while intrusion detection systems use machine learning models to identify potential threats and alert security teams to take action. Combining these defensive techniques with resilient machine learning models is the smartest way to protect against adversarial attacks.
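
As a minimal sketch of the anomaly-detection side, assuming scikit-learn’s IsolationForest: fit a detector on known-good traffic and flag incoming inputs that look unlike it before they ever reach the downstream model. The feature layout and contamination rate here are illustrative.

```python
# Anomaly-detection sketch: flag inputs that do not resemble normal traffic
# before they reach the downstream model (illustrative features and settings).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(6)
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 8))   # baseline behavior
detector = IsolationForest(contamination=0.01, random_state=6).fit(normal_traffic)

# At runtime, score incoming inputs; -1 means "unusual, send to review".
incoming = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(5, 8)),   # ordinary-looking inputs
    rng.normal(loc=6.0, scale=1.0, size=(2, 8)),   # far outside the training distribution
])
print(detector.predict(incoming))   # 1 = normal, -1 = anomaly
```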

While machine learning has brought about many benefits, it is clearly not without vulnerabilities. Adversarial attacks that deceive and exploit machine learning models are constantly evolving, and they can have serious consequences for organizations that rely on these models in their decision-making processes.

To protect against these attacks, organizations should use robust and resilient machine learning models, apply effective defensive techniques, and train employees to recognize and respond appropriately to social engineering attacks, thereby ensuring the security of their systems and data.

 

