Adversarial AI and the dystopian future of technology



AI is a fast-growing technology with many benefits for society. However, as with all new technologies, misuse is a potential risk. One of the most disturbing potential abuses of AI comes in the form of adversarial AI attacks.

In an adversarial AI attack, AI is used to maliciously manipulate or deceive another AI system. Most AI programs learn, adapt and evolve through behavioral learning. This makes them vulnerable to exploitation, because it leaves room for anyone to teach an AI algorithm malicious actions, ultimately producing adversarial results. Cybercriminals and threat actors can exploit this vulnerability for malicious purposes and intent.

While most adversarial attacks to date have been carried out by researchers and in labs, they are a growing concern. The occurrence of an adversarial attack on an AI or machine learning algorithm points to a deep crack in the AI mechanism. The presence of such vulnerabilities in AI systems can hinder the growth and development of AI and become a significant security risk for people using AI-integrated systems. To fully exploit the potential of AI systems and algorithms, it is therefore crucial to understand and mitigate adversarial AI attacks.

Understanding adversarial AI attacks

While the modern world is deeply layered with AI, the technology has yet to take over completely. Since its arrival, AI has faced ethical criticism, leading to a general reluctance to adopt it fully. The growing concern that vulnerabilities in machine learning models and AI algorithms could be put to malicious use is a significant barrier to the growth of AI/ML.

The fundamental premise of an adversarial attack is always the same: manipulating an AI algorithm or an ML model to produce malicious results. However, an adversarial attack usually takes one of two forms:

- Poisoning: the ML model is fed inaccurate or misinterpreted data to trick it into making erroneous predictions.
- Contamination: the ML model is fed maliciously designed data to trick an already-trained model into performing malicious actions and predictions.
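To make the first form concrete, here is a minimal, hypothetical sketch of a poisoning attack: an attacker who can tamper with the training data flips a fraction of the labels, and the resulting model degrades. The dataset, model and poisoning rate are illustrative assumptions, not details from any study cited here.

```python
# Hypothetical poisoning sketch: flipping a fraction of training labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, rate, rng):
    """Flip `rate` of the training labels, simulating an insider tampering with the dataset."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(rate * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, rate=0.3, rng=rng)
)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude label flipping is usually enough to visibly degrade the model's test accuracy.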

Of the two methods, contamination is the more likely to become a widespread problem. Since the technique only requires a malicious actor to inject or submit crafted inputs, such attacks can spread quickly with the help of other attacks. By contrast, poisoning seems easier to control and prevent, since tampering with a training dataset requires an insider. Insider threats of that kind can be mitigated with a zero-trust security model and other network security protocols.

However, protecting a business from adversarial threats will remain a difficult task. While typical online security vulnerabilities can be addressed with various tools, such as residential proxies, VPNs or even anti-malware software, adversarial AI threats can bypass these defenses, making such tools too primitive to provide security.

How is adversarial AI a threat?

AI is already a well-integrated, important part of critical fields such as finance, healthcare and transportation. Security issues in these fields can be particularly dangerous to human life. And since AI is so well integrated into human lives, the impact of adversarial threats against it could wreak havoc.

In 2018, a report from the Office of the Director of National Intelligence identified several adversarial machine learning threats. Among the threats identified in the report, one of the most pressing concerns was the potential for these attacks to compromise computer vision algorithms.

Research has so far produced several examples of AI poisoning. One such study involved adding small changes, or “perturbations,” to an image of a panda, invisible to the naked eye. The changes caused the ML algorithm to identify the image of the panda as that of a gibbon.
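The article does not name the exact technique, but results like the panda-to-gibbon misclassification are commonly produced with the fast gradient sign method (FGSM), which nudges each pixel slightly in the direction that increases the model's loss. The sketch below assumes PyTorch and torchvision; the pretrained model, the random stand-in image and the epsilon value are illustrative choices.

```python
# FGSM-style sketch of an imperceptible perturbation (illustrative assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

def fgsm_example(model, image, true_label, epsilon=0.007):
    """Return a perturbed copy of `image` nudged against the model's gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224)   # random stand-in for the panda photo
label = torch.tensor([388])          # ImageNet class id for "giant panda"
adv = fgsm_example(model, image, label)
print("prediction before:", model(image).argmax(dim=1).item())
print("prediction after: ", model(adv).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, which is why such changes remain invisible to the naked eye while still flipping the model's prediction.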

Similarly, another study points to the possibility of AI contamination in which attackers targeted facial recognition cameras with infrared light. This allowed the attackers to evade accurate recognition and impersonate other people.

In addition, adversarial attacks are evident in the manipulation of email spam filters. Since spam-filtering tools flag spam emails by tracking certain words, attackers can manipulate these tools by using acceptable words and phrases, giving them access to the recipient’s inbox (a toy sketch after the list below illustrates this). Considering these examples and studies, it is easy to identify the impact of adversarial AI attacks on the cyber threat landscape, such as:

- Adversarial AI opens up the possibility of disabling AI-based security tools such as phishing filters.
- IoT devices rely on AI; adversarial attacks on them could lead to large-scale hacking attempts.
- AI tools tend to collect personal information; attacks can manipulate these tools into revealing the personal information they have collected.
- AI is part of defense systems; adversarial attacks on defense tools could endanger national security.
- Adversarial techniques can produce new varieties of attacks that go undetected.
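The spam-filter manipulation mentioned above can be illustrated with a toy naive Bayes filter: padding a spam message with enough benign words shifts the classifier's verdict. The training corpus and messages below are invented purely for illustration.

```python
# Toy illustration of evading a word-based spam filter by padding with benign words.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "win cash prize claim now", "cheap pills limited offer",  # spam
    "meeting notes attached", "lunch tomorrow at noon",        # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(train_texts, train_labels)

original = "win cash prize claim now"
padded = original + " meeting notes lunch tomorrow attached noon"

print("original message:", spam_filter.predict([original])[0])
print("padded message:  ", spam_filter.predict([padded])[0])
```

On this toy corpus the padded message is classified as legitimate even though the spam payload is unchanged, which is exactly the weakness the attackers exploit.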

It is increasingly important to maintain security and vigilance against adversarial AI attacks.

Is there any prevention?

Given AI’s potential to make human lives more manageable and more sophisticated, researchers are already devising several ways to protect systems from adversarial AI. One such method is adversarial training, in which the machine learning algorithm is pre-trained against poisoning and contamination attempts by feeding it potential perturbations.

In the case of computer vision, the algorithms come preloaded with images and their alterations. For example, an automotive vision algorithm designed to identify stop signs would learn all possible alterations of the sign, such as stickers, graffiti or even missing letters. The algorithm will then correctly identify the sign despite the attacker’s manipulations. However, this method is not foolproof, as it is impossible to anticipate every possible adversarial attack.
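A minimal sketch of such an adversarial training loop, assuming a PyTorch classifier, a DataLoader of labelled images and an optimizer (all hypothetical names), might look like this: each batch is trained on both its clean images and FGSM-perturbed copies.

```python
# Sketch of an adversarial training epoch (model, loader and optimizer are assumed).
import torch
import torch.nn.functional as F

def fgsm(model, images, labels, epsilon=0.03):
    """Craft perturbed copies of a batch using the fast gradient sign method."""
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    return (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, train_loader, optimizer, epsilon=0.03):
    """Train on clean batches and their perturbed counterparts in each step."""
    model.train()
    for images, labels in train_loader:
        adv_images = fgsm(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = (F.cross_entropy(model(images), labels)
                + F.cross_entropy(model(adv_images), labels)) / 2
        loss.backward()
        optimizer.step()
```

The epsilon budget, the clean/adversarial loss mix and the attack used to generate training perturbations are all tuning choices, which is one reason the defense cannot cover every possible attack.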

Another defense uses non-intrusive image-quality features to distinguish between legitimate and adversarial input. The technique could potentially neutralize adversarial inputs and alterations before they ever reach the classifier. A further method involves preprocessing and denoising, which automatically removes possible adversarial noise from the input.
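The preprocessing and denoising idea can be sketched as a simple wrapper that smooths every input before it reaches the classifier, partially washing out small adversarial perturbations. The Gaussian blur and its parameters below are illustrative choices, not the specific technique referenced above.

```python
# Illustrative denoising wrapper: blur the input, then classify it.
import torch
from torchvision.transforms import GaussianBlur

blur = GaussianBlur(kernel_size=5, sigma=1.0)

def denoise_then_classify(model, image):
    """Smooth the (assumed) image tensor, then return the model's predicted class."""
    return model(blur(image)).argmax(dim=1)
```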

Conclusion

Despite its widespread use in the modern world, AI has yet to take over. While machine learning and AI have managed to expand into, and even dominate, some areas of our daily lives, they are still evolving significantly. Until researchers fully understand the potential of AI and machine learning, a gaping hole will remain in how to mitigate adversarial threats within AI technology. Research on the issue is ongoing, however, mainly because it is critical to the development and adoption of AI.

Waqas is a cybersecurity journalist and writer.


This post, Adversarial AI and the dystopian future of technology, was originally published at https://venturebeat.com/2022/04/03/adversarial-ai-and-the-dystopian-future-of-tech/
