AI Weekly: The Russia-Ukraine Conflict Is A Test Case For AI In Warfare


As the Russian invasion of Ukraine continues unabated, the conflict has become a test case for technology’s role in modern warfare. Destructive software – believed to be the work of Russian intelligence – has compromised hundreds of computers at Ukrainian government agencies. On the other side, a loose collective of hackers has targeted prominent Russian websites, apparently taking down pages for Russia’s largest stock exchange and the Russian Ministry of Foreign Affairs.

AI has also been proposed – and is being used – as a way to help turn the tide decisively. As Fortune writes, Ukraine is using Turkish-made autonomous TB2 drones to drop laser-guided bombs and direct artillery strikes. The Russian-made Lancet drone, which the country has reportedly used in Syria and could deploy in Ukraine, has similar capabilities, autonomously navigating to and crashing into preselected targets.

Nor is AI confined to the battlefield. Social media algorithms, such as TikTok’s, have become a central part of the information war, surfacing fragments of the attacks to millions of people. These algorithms have proven to be a double-edged sword, also amplifying deceptive content such as video game clips edited to look like on-the-ground footage and fake live streams of invading troops.

Meanwhile, Russian troll farms have used AI to generate human faces for fake propaganda personas on Twitter, Facebook, Instagram and Telegram. A campaign involving about 40 fake accounts was recently identified by Facebook’s parent company Meta, which said the accounts mainly posted links to pro-Russian, anti-Ukraine content.

Some vendors have suggested other uses of the technology, such as developing anomaly detection apps for cybersecurity and using natural language processing to identify disinformation. Snorkel AI, a data science platform, has made its services available for free to “support federal efforts” to “analyze adversary signals and communications, identify high-value information, and use it to guide diplomacy and decision-making,” among other use cases.
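As a rough illustration of the second idea – using natural language processing to surface suspected disinformation – below is a minimal sketch built on an off-the-shelf zero-shot classifier from Hugging Face’s transformers library. The model choice, the candidate labels, and the confidence threshold are all illustrative assumptions, not anything Snorkel AI or any government program actually deploys.

```python
# Minimal sketch: triaging posts for human review with zero-shot NLP.
# Assumptions (not from the article): the bart-large-mnli model, the
# candidate labels, and the 0.7 threshold are all illustrative choices.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["disinformation", "factual reporting", "opinion"]

def flag_for_review(post: str, threshold: float = 0.7) -> bool:
    """Return True if the post should be routed to a human fact-checker."""
    result = classifier(post, candidate_labels=LABELS)
    # result["labels"] is sorted by descending score, so index 0 is the
    # model's best guess among the candidate labels.
    return (result["labels"][0] == "disinformation"
            and result["scores"][0] >= threshold)

posts = [
    "BREAKING: leaked footage shows entire city surrendering overnight",
    "The ministry published its quarterly budget report today",
]
for post in posts:
    print(flag_for_review(post), "-", post)
```

Even in a more serious system, a classifier like this would only triage content for human fact-checkers; zero-shot scores on short, context-free posts are noisy and easy to game.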

Some in the AI community support the use of the technology to an extent, pointing to AI’s potential to advance cyber defenses and power denial-of-service attacks, for example. But others reject its application, arguing that it sets a harmful, ethically problematic precedent.

“We urgently need to identify the vulnerabilities of today’s machine learning…algorithms, which are now weaponized by cyber warfare,” wrote Lê Nguyên Hoang, an AI researcher helping to build Tournesol, an open source video recommendation system, in a post on Twitter.

It seems that Kai-Fu Lee correctly predicted that AI would be the third revolution in warfare, after gunpowder and nuclear weapons. Autonomous weapons are one aspect, but AI also has the potential to scale data analysis, misinformation and content curation beyond what was historically possible in major conflicts.

As the Brookings Institution pointed out in a 2018 report, advances in AI are making synthetic media fast, inexpensive, and easy to produce. Tools for AI audio and video misinformation – “deepfakes” – are already available, such as Face2Face, which can map one person’s expressions onto another’s face in a video. Other tools can manipulate footage of any world leader, or even synthesize street scenes to appear as if they were in a different environment.

Elsewhere, demonstrating AI’s potential for analytics, geospatial data firm Spaceknow claims it was able to detect military activity in the Russian town of Yelnya, including the movement of heavy equipment, as far back as December of last year. The Pentagon’s Project Maven – to which Google controversially contributed expertise – uses machine learning to detect and classify objects of interest in drone footage.
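To make that detect-and-classify pattern concrete, here is a toy sketch using torchvision’s COCO-pretrained Faster R-CNN. Everything in it – the model choice, the confidence threshold, the overhead_scene.jpg file name – is an assumption for illustration; it is not how Spaceknow or Project Maven actually work, and a COCO-trained model only recognizes everyday object classes, not military equipment.

```python
# Toy sketch of the detect-and-classify pattern described above.
# A COCO-pretrained model only knows everyday classes (car, truck,
# airplane...), so this is illustrative, not a defense system.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights)
model.eval()  # inference mode: the model returns per-image detections

CLASSES = weights.meta["categories"]  # COCO class names

def detect(path: str, score_threshold: float = 0.6):
    """Return (label, score, box) tuples for confident detections."""
    image = to_tensor(Image.open(path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]  # dict with boxes, labels, scores
    return [
        (CLASSES[label], float(score), box.tolist())
        for box, label, score in zip(
            output["boxes"], output["labels"], output["scores"])
        if score >= score_threshold
    ]

# Hypothetical file name; any RGB image works.
for label, score, box in detect("overhead_scene.jpg"):
    print(f"{label}: {score:.2f} at {box}")
```

A production system would replace the pretrained weights with a model fine-tuned on labeled overhead or drone imagery; the surrounding pipeline shape stays the same.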

The North Atlantic Treaty Organization (NATO) – which last week activated its Response Force as a defensive measure for the first time in response to the Russian attack – launched an AI strategy and a $1 billion fund last October to develop new AI defense technologies. In a proposal, NATO emphasized the need for “cooperation and coordination” among its members on “all matters related to AI for transatlantic defense and security,” including on human rights and humanitarian law.

AI technology in warfare, for better or for worse, seems likely to become a fixture of conflicts outside Ukraine. A critical mass of countries have put their shoulders to the wheel, including the US. The Department of Defense (DoD) plans to invest $874 million this year in AI-related technologies as part of the military’s $2.3 billion science and technology research budget.

The daunting challenge will be ensuring – assuming it is even possible – that AI is applied ethically in these circumstances. In an article for The Cove, a professional development platform for the Australian military, Aaron Wright examines whether AI for war can ever be ethical. He points out that the impact of weapons of war is often not fully understood until after the weapons themselves have been deployed, noting that members of the Manhattan Project felt justified in their work to invent the atomic bomb.

“Ultimately, whether one considers the use of AI in war as ethical… [relies] on your inherent optimism about AI,” he says. “You can take a utilitarian approach and consider all the lives saved by precise and calculated robot attacks without the loss of human soldiers, or take a virtue-ethical approach and complain about killer robots wiping humans from existence based on [a] series of numbers in their internal algorithms… Since the use of AI on the battlefield is seemingly unavoidable… careful and rigorous standards [are] a required step to make AI for war ethical, but not a guarantee that it will be.”

The DoD has made an attempt at this, issuing guidance for military AI contractors that recommends suppliers perform harm modeling, address the effects of flawed data, plan for system audits, and confirm that new data won’t degrade system performance. Whether contractors – and perhaps more importantly, adversaries – adhere to these kinds of guidelines, however, is another matter. So, too, is whether the supposed advantage AI offers in warfare outweighs the consequences. Ukraine will provide some answers; we can only hope it does so with minimal casualties.

For AI reporting, send news tips to Kyle Wiggers – and subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thank you for reading,

Kyle Wiggers

AI Senior Staff Writer


This post was originally published at https://venturebeat.com/2022/03/04/ai-weekly-the-russia-ukraine-conflict-is-a-test-case-for-ai-in-warfare/