The rise of generative AI tools like ChatGPT has increased the potential for a wide range of attackers to target elections around the world in 2024, according to a new report by cybersecurity giant CrowdStrike. Both state-linked hackers and allied so-called “hacktivists” are increasingly experimenting with ChatGPT and other AI tools, enabling a wider range of actors to carry out cyberattacks and scams. This includes hackers linked to Russia, China, North Korea, and Iran, who have been testing new ways to use these technologies against the U.S., Israel, and European countries.

With half the world’s population set to vote in 2024, the use of generative AI to target elections could be a “huge factor,” says Adam Meyers, head of counter-adversary operations at CrowdStrike. So far, CrowdStrike analysts have been able to detect the use of these models through comments in scripts that would have been placed there by a tool like ChatGPT. But, Meyers warns, “this is going to get worse throughout the course of the year.”

If state-linked actors continue to improve their use of AI, “it’s really going to democratize the ability to do high-quality disinformation campaigns” and speed up the tempo at which they can carry out cyberattacks, Meyers says. “Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will highly likely use such tools to conduct [information operations] against elections in 2024,” the report’s authors write. “Politically active partisans within those countries holding elections will also likely use generative AI to create disinformation to disseminate within their own circles.”
Read more at: https://time.com