Tuesday, May 20, 2025

AI cybercrimes will proliferate in 2025, expert says


“THERE is no doubt that the trend of cyber criminals using AI for attacks will continue in 2025,” Jeff Crume, an IBM Fellow and Distinguished Engineer, said in a YouTube video.

The video, released just after the new year, is part of IBM's Technology channel on YouTube, which also covers topics such as AI and quantum computing and is provided to stakeholders, including the infotech media. Crume, known for his expertise in the field, emphasized the increasing sophistication of cyber threats, particularly those driven by advancements in artificial intelligence (AI).

The IBM Distinguished Engineer highlighted that as AI technologies evolve, attackers are finding it easier to gain access to systems through legitimate means rather than traditional hacking methods. This shift necessitates a stronger focus on authentication measures.


“We have got to do a better job of authentication because attackers are finding it easier to log in than to hack in,” he explained, adding that he anticipates that as AI-generated phishing attacks become more prevalent, traditional detection methods, such as identifying poor grammar and spelling errors, will no longer suffice.

“The shocking story of the deepfaked CFO on a video call that convinced an employee to wire $25M to an attacker is probably the most well-known,” Crume said, adding that the incident underscores the potential damage that deepfakes can inflict on organizations.

He then turned to the necessity of AI governance frameworks that ensure the ethical usage and management of AI-related technologies, which he says is crucial because, as “organizations grow they also must develop policies that not only guide the deployment of AI but also safeguard against potential vulnerabilities introduced by these systems.”

Another critical concern is the emergence of “harvest-now, decrypt-later” attacks. He warned that these attacks pose a significant threat, particularly with advancements in quantum computing. “The prospect of harvest-now, decrypt-later attacks is already a concern,” he noted, urging organizations to prioritize the transition to post-quantum cryptography to safeguard sensitive data.

He also emphasized the risks associated with Shadow AI, which refers to the unauthorized use of AI tools by employees. This trend poses significant security threats, as it can lead to data breaches and noncompliance, particularly when sensitive information is input into public AI platforms. Organizations must develop strategic policies to manage and monitor AI usage effectively to mitigate these risks.

Using a reverse whiteboard, writing the various cybersecurity topics on screen as he spoke, Crume underscored the importance of governance frameworks for AI technologies as they become more integrated into cybersecurity practices, reiterating that organizations must ensure the ethical usage and management of AI systems to avoid introducing new vulnerabilities.

Despite predictions that ransomware attacks might decline due to companies refusing to pay ransoms, Crume indicated that ransomware will remain a significant concern. He suggested that while the tactics may evolve, vigilance and proactive measures will be essential in combating this ongoing threat.
