How to protect AI from cyber-attacks – start with the data



Artificial intelligence is a game-changer when it comes to security. Not only does it greatly expand the ability to manage and monitor systems and data, it adds a level of dynamism to both protection and remediation that greatly increases the difficulty and cost of mounting a successful attack.

But AI is still a digital technology, which means it can also be compromised, especially when faced with an intelligent attack. As the world becomes more dependent on intelligent, autonomous systems for everything from business processes to transportation to healthcare, the consequences of a security breach increase, even if the likelihood decreases.


For this reason, enterprises need to take a close look at their AI implementations to date, as well as their ongoing strategies, to see where the vulnerabilities lie and what can be done to eliminate them.

According to Robotics Biz, the most common type of attack on AI systems to date is the infiltration of large-scale algorithms to manipulate their predictive output. In most cases, this involves feeding false or malicious data into the system (a practice known as data poisoning) to give it a skewed, if not completely false, picture of reality.

Any AI connected to the internet can be attacked in this way, often over a period of time, so that the effects are gradual and the damage is long-lasting. The best remedy is to harden both the AI algorithm and the process by which data is ingested, and to maintain strict control over data conditioning to detect erroneous or malicious data before it enters the pipeline.
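The data-conditioning gate described above can be sketched as a simple statistical filter applied before records enter the training pipeline. This is a minimal illustration, not a production defense: the `value` field, the z-score threshold, and the sample data are all assumptions for the example.

```python
import statistics

def condition_batch(records, history, z_threshold=3.0):
    """Split incoming records into accepted and rejected sets.

    A record is rejected when its numeric 'value' field deviates from the
    historical mean by more than z_threshold standard deviations -- a crude
    stand-in for the anomaly checks a real ingestion pipeline would run.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against zero spread
    accepted, rejected = [], []
    for rec in records:
        z = abs(rec["value"] - mean) / stdev
        (accepted if z <= z_threshold else rejected).append(rec)
    return accepted, rejected

# Historical values the model has already been trained on (illustrative).
history = [10.0, 11.2, 9.8, 10.5, 10.1, 9.9]
# The second entry simulates a poisoned outlier slipped into the feed.
batch = [{"value": 10.3}, {"value": 87.0}]
ok, bad = condition_batch(batch, history)
```

Real pipelines would layer checks like this with provenance tracking and schema validation, but the principle is the same: suspect data is quarantined before it can skew the model.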

Attacking AI through its data sources

The need for massive amounts of data is actually one of AI’s greatest weaknesses, as it creates a situation where security can be breached without attacking the AI itself. A recent series of articles from the Center for Security and Emerging Technology (CSET) highlighted the growing number of ways white-hat hackers have demonstrated that AI can be compromised by targeting its data sources.

Just as this technique can be used to misdirect an autonomous car or cause it to accelerate to dangerous speeds, it can also throw business processes into disarray. Unlike traditional cyber-attacks, however, the goal is usually not to destroy the AI or disable systems, but to take control of the AI to the attacker’s advantage, for example to divert data or money, or simply to cause problems.

Image-based training data is among the most vulnerable, said Dan Boneh, a Stanford University cryptography professor. Typically, a hacker uses the fast gradient sign method (FGSM), which creates pixel-level changes in training images that are undetectable to the human eye but cause confusion in training models. These “adversarial examples” are very difficult to detect, yet can alter the results of algorithms in many different ways, even if the attacker only has access to the input, training data, and output. And as AI algorithms become increasingly dependent on open-source tools, hackers will gain ever more access to the algorithms themselves.
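The core of FGSM is a one-line update: nudge every input feature by a small amount in the direction that increases the model’s loss. The toy below applies it to a logistic-regression “model” (not an image classifier) purely to make the mechanics concrete; the weights, input, and epsilon are invented for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign method against a logistic-regression model.

    For cross-entropy loss, the gradient w.r.t. the input x is (p - y) * w.
    FGSM shifts every feature by eps in the sign of that gradient, which is
    the per-feature direction that increases the loss the fastest.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # toy model weights (illustrative)
b = 0.0
x = 0.1 * np.sign(w)         # an input the model scores as positive
y = 1.0                      # its true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.3)

clean_pred = sigmoid(w @ x + b) > 0.5      # correct classification
adv_pred = sigmoid(w @ x_adv + b) > 0.5    # flips after a small perturbation
```

Against an image model, the same eps-sized shift is spread across thousands of pixels, which is why the perturbation stays invisible to a human while still flipping the prediction.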

How to protect your AI

What can enterprises do to protect themselves? According to Akriti Galav of Great Learning and SEO consultant Saket Gupta, the three most important steps to take now are:

1. Maintain the strictest possible security protocols throughout the data environment.
2. Ensure that all operations performed by the AI are logged and placed in an audit trail.
3. Implement strong access control and authentication.
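The second step, an audit trail of AI operations, can be made tamper-evident by hash-chaining each log entry to the previous one. The sketch below is an illustrative design, not a reference implementation; the class name, entry fields, and example operations are all assumptions.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only audit trail where each entry records the hash of the
    previous entry, so any later modification breaks the chain."""

    GENESIS = "0" * 64  # placeholder hash before the first entry

    def __init__(self):
        self.entries = []
        self._last_hash = self.GENESIS

    def record(self, actor, operation, detail):
        entry = {
            "ts": time.time(),
            "actor": actor,
            "operation": operation,
            "detail": detail,
            "prev": self._last_hash,
        }
        # Hash the entry body (everything except its own hash field).
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self):
        """Recompute every hash; return False if any entry was altered."""
        prev = self.GENESIS
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("pipeline", "ingest", "batch-42 accepted")
log.record("model", "predict", "scored 1,000 records")
```

Because each hash covers the previous one, rewriting any earlier entry invalidates every entry after it, which is what turns a plain log into an audit trail.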

Organizations must also pursue longer-term strategic goals, such as developing data-protection policies specifically for AI training, educating staff to recognize AI risks and erroneous outcomes, and maintaining an ongoing risk-assessment mechanism that is both dynamic and forward-looking.

No digital system can be 100% secure, no matter how intelligent it is. The dangers inherent in compromised AI are more subtle, but no less pervasive, than those of traditional platforms, so enterprises should update their security policies to reflect this new reality now, rather than waiting for the damage to be done.

And as with legacy technology, securing AI is a two-pronged effort: reducing the attack surface and the likelihood of attacks, and minimizing damage and restoring trust as quickly as possible when the inevitable happens.

