As Russia's invasion of Ukraine continues unabated, it's becoming a test case for the role of technology in modern warfare. Destructive software, presumed to be the work of Russian intelligence, has compromised hundreds of computers at Ukrainian government agencies. On the other side, a loose group of hackers has targeted key Russian websites, appearing to bring down webpages for Russia's largest stock exchange as well as the Russian Foreign Ministry.
AI, too, has been proposed, and is being used, as a way to help decisively turn the tide. As Fortune writes, Ukraine has been using autonomous Turkish-made TB2 drones to drop laser-guided bombs and direct artillery strikes. Russia's Lancet drone, which the country reportedly used in Syria and could use in Ukraine, has similar capabilities, enabling it to navigate and crash into preselected targets.
AI hasn't been confined strictly to the battlefield. Social media algorithms like TikTok's have become a central part of the information war, surfacing clips of attacks for millions of people. These algorithms have proven to be a double-edged sword, amplifying misleading content like video game clips doctored to look like on-the-ground footage and bogus livestreams of invading forces.
Meanwhile, Russian troll farms have used AI to generate human faces for fake, propagandist personas on Twitter, Facebook, Instagram, and Telegram. A campaign involving around 40 false accounts was recently identified by Meta, Facebook's parent company, which said that the accounts mainly posted links to pro-Russia, anti-Ukraine content.
Some vendors have proposed other uses of the technology, like developing anomaly detection apps for cybersecurity and using natural language processing to identify disinformation. Snorkel AI, a data science platform, has made its services available for free to "support federal efforts" to "analyze signals and adversary communications, identify high-value information, and use it to guide diplomacy and decision-making," among other use cases.
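Under the hood, disinformation-flagging tools of the kind described above are usually text classifiers. As a rough illustration only — this is not Snorkel AI's actual method, and the training sentences and labels below are invented placeholders — a minimal bag-of-words Naive Bayes classifier can be sketched in a few dozen lines:

```python
# Toy bag-of-words Naive Bayes classifier, illustrating the general
# text-classification approach behind disinformation detection.
# All examples and labels are invented for illustration only.
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

def train(examples):
    """examples: list of (text, label) pairs -> model dict."""
    word_counts = defaultdict(Counter)   # label -> word frequencies
    label_counts = Counter()             # label -> number of documents
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    vocab = {w for counts in word_counts.values() for w in counts}
    return {"words": word_counts, "labels": label_counts, "vocab": vocab}

def classify(model, text):
    """Return the most probable label, with add-one (Laplace) smoothing."""
    total_docs = sum(model["labels"].values())
    vocab_size = len(model["vocab"])
    best_label, best_score = None, float("-inf")
    for label, doc_count in model["labels"].items():
        score = math.log(doc_count / total_docs)          # log prior
        total_words = sum(model["words"][label].values())
        for word in tokenize(text):
            count = model["words"][label][word]
            score += math.log((count + 1) / (total_words + vocab_size))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Invented placeholder training data.
training = [
    ("verified footage from independent reporters on the ground", "credible"),
    ("official statement confirmed by multiple news agencies", "credible"),
    ("shocking secret video they do not want you to see", "suspect"),
    ("share before deleted unverified claim spreading fast", "suspect"),
]
model = train(training)
print(classify(model, "secret video spreading fast share now"))  # -> suspect
```

Production systems replace the word counts with learned embeddings and large labeled corpora, but the basic shape — featurize the text, score it against classes learned from labeled examples — is the same.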
Some in the AI community support the use of the technology to a degree, pointing, for example, to AI's potential to power both cyber defense and denial-of-service attacks. But others decry its application, arguing that it sets a harmful, ethically problematic precedent.
"We urgently must identify the vulnerabilities of today's machine learning … algorithms, which are now weaponized by cyberwarfare," wrote Lê Nguyên Hoang, an AI researcher who's helping to build an open source video recommendation platform called Tournesol, on Twitter.
Kai-Fu Lee, it would seem, rightly predicted that AI would be the third revolution in warfare, after gunpowder and nuclear weapons. Autonomous weapons are one aspect, but AI also has the potential to scale data analysis, misinformation, and content curation beyond what was possible in major conflicts historically.
As the Brookings Institution points out in a 2018 report, advances in AI are making synthetic media quick, cheap, and easy to produce. AI-generated audio and video disinformation, known as "deepfakes," is already available through apps like Face2Face, which allows one person's expressions to be mapped onto another's face in a video. Other tools can manipulate media of any world leader, or even synthesize street scenes to appear in a different environment.
Elsewhere, demonstrating AI's analytics potential, geospatial data firm Spaceknow claims it was able to detect military activity in the Russian town of Yelnya beginning last December, including the movement of heavy equipment. The Pentagon's Project Maven, to which Google controversially contributed expertise, taps machine learning to detect and classify objects of interest in drone footage.
Reading the writing on the wall, the North Atlantic Treaty Organization (NATO), which activated its Response Force for the first time last week as a defensive measure in response to Russia's assault, last October launched an AI strategy and a $1 billion fund to develop new AI defense technologies. In a proposal, NATO emphasized the need for "collaboration and cooperation" among members on "any matters relating to AI for transatlantic defense and security," including as they relate to human rights and humanitarian law.
AI tech in warfare, for better or worse, seems likely to become a fixture of conflicts beyond Ukraine. A critical mass of countries has thrown its weight behind it, including the U.S.: the Department of Defense (DoD) plans to invest $874 million this year in AI-related technologies as part of the Army's $2.3 billion science and technology research budget.
See the FULL STORY at VentureBeat.