
AI is making it easier for people to commit crimes, says top US cybersecurity official

Jen Easterly, Director of the Cybersecurity and Infrastructure Security Agency (CISA), has highlighted a rather alarming fact: generative AI empowers existing cybercriminals and lowers the barrier to entry for nefarious activities. In short, it is making it easier for people to engage in criminal activity in the digital realm.

As AI continues to evolve, cybercriminals are more capable than ever of carrying out a wide array of malicious attacks, ranging from traditional phishing and spamming to more sophisticated acts such as blackmail, election interference through misinformation campaigns, and even terrorism, Easterly told the online news portal Axios in an interview.

Easterly highlighted the significant increase in risk posed by the rapid advancement of AI, noting how its unpredictable nature and potent capabilities can exacerbate a situation that is already worsening by the day.

The agility and power of AI-driven cyberattacks introduce new layers of uncertainty into the cybersecurity landscape, necessitating proactive measures to mitigate emerging threats.

While CISA lacks regulatory authority over private businesses, Easterly stresses that government agencies need to collaborate with tech companies to develop robust cybersecurity practices.

The recent launch of the “secure by design” pledge, which major tech firms have endorsed, signals a collective commitment to building systems that are more resilient against evolving threats.

Easterly’s extensive background in the US military and global cybersecurity makes her a voice that needs to be taken seriously. Naturally, when she takes a proactive stance on protecting critical infrastructure, including election systems, it would be a good idea for world leaders to take note.

While she is confident that most electoral mechanisms are safe from direct AI-fueled attacks, Easterly urges vigilance about the potential for generative AI to sow distrust and undermine an election’s integrity. Such attacks can also erode public confidence in legitimate institutions.

What complicates matters is the lack of global norms that can properly govern cyber warfare, which allows threat actors to exploit vulnerabilities in civilian critical infrastructure.

Despite the challenges posed by AI-driven threats, there are reasons to be optimistic about technology’s potential in cybersecurity. As it turns out, AI can be beaten by AI. If managed and harnessed correctly, AI can help identify vulnerabilities and fortify aging or legacy systems against cyberattacks while organizations prepare to replace them with more up-to-date systems.

Easterly highlights the importance of proactive cybersecurity practices, including routine patching and robust password protocols, as the basic foundation of defense against AI-based threats. Just as important is a culture that keeps the public acutely aware of cybersecurity concerns. With these fundamentals in place, organizations can mitigate the risks posed by AI-infused cyber threats and safeguard critical systems and infrastructure against disruption.


