ARTIFICIAL INTELLIGENCE: Its dual impact on cybersecurity


ARTIFICIAL intelligence (AI) has emerged as a transformative force across various industries, influencing both sides of the cybersecurity spectrum. The technology is wielded not only by cyber defenders but also by hackers, underscoring the importance of understanding both AI’s potential benefits and its abuses.

The Kaspersky Cyber Security Weekend, themed “Deus Ex Machina: Setting Secure Directives for Smart Machines,” delved into the intricate landscape of AI’s role in cybersecurity. The discussions aimed to illuminate how to harness AI’s potential positively while guarding against its misuse.

The advent of ChatGPT in November 2022 sparked discussions about AI’s capacity to surpass human capabilities and the implications of that prospect. A key limitation is evident, however: AI relies heavily on data, whether scraped from the internet or stored on its own servers, to generate answers to user queries. These queries, known as ‘prompts,’ direct artificial neural networks to retrieve relevant patterns and compose a response.


“Artificial intelligence can never surpass natural intelligence,” remarks Vitaly Kamluk, Head of Kaspersky’s Global Research and Analysis Team (GReAT). He emphasizes that robust cyber awareness and education can counteract cybercriminals who exploit AI for social engineering.

During the conference, Kamluk presented a demonstration of ChatGPT’s capabilities and limitations. The popular AI service from OpenAI highlighted a distinct facet of AI’s impact on cybersecurity: its ability, or lack thereof, to truly comprehend questions.

In an exclusive interview with Malaya Business Insight, Kamluk explained AI’s process of ‘tokenizing’ queries: breaking them down into segments and assigning varying importance to individual words or data strings. This mechanistic approach enables AI to generate responses “without thinking”: the system processes the query against scraped internet data and constructs an answer.
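A minimal sketch of that tokenizing step appears below, using OpenAI’s open-source tiktoken library. ChatGPT’s exact internal pipeline is not public, so the encoding name and prompt here are merely illustrative:

```python
# A minimal sketch of the tokenizing step described above, using
# OpenAI's open-source tiktoken library. The encoding and prompt are
# illustrative assumptions; ChatGPT's internals are not public.
import tiktoken

# Byte-pair encoding used by recent OpenAI chat models.
enc = tiktoken.get_encoding("cl100k_base")

prompt = "Can artificial intelligence surpass natural intelligence?"
token_ids = enc.encode(prompt)                    # the prompt as integer IDs
fragments = [enc.decode([t]) for t in token_ids]  # the segments the model sees

print(token_ids)   # one integer per token
print(fragments)   # words and sub-word pieces, not whole sentences
```

Each fragment carries no meaning on its own; the model only learns statistical weightings between such pieces, which is what Kamluk means by responding “without thinking.”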

Kamluk emphasizes, “This is all data, devoid of context or experience.” This underscores why AI can be leveraged for orchestrating cyberattacks: culprits can veil their actions in the cloak of technology, distancing themselves from accountability.

The syndrome of “suffering distancing”

This phenomenon, termed “suffering distancing syndrome,” exemplifies how AI acts as an intermediary, allowing wrongdoers to detach from the emotional toll of their actions. Drawing parallels between physical assaults and virtual thefts, Kamluk illustrates how AI-facilitated cybercrime blurs the lines between actors and actions. This dissociation potentially lessens criminals’ sense of culpability by shifting blame onto the technology itself, heralding a new accountability paradigm.

Another intriguing dimension is the concept of “responsibility delegation.” As AI-driven automation assumes a greater role in cybersecurity processes, human operators may feel less accountable for security outcomes. In organizational contexts, the presence of AI-backed defense mechanisms might inadvertently foster complacency and shift accountability away from human oversight. The ascendancy of fully autonomous systems exacerbates this phenomenon, akin to a human driver’s waning attention while relying on autopilot.

Managing, not stopping AI’s presence 

Kamluk’s insights culminate in a set of guidelines for harnessing AI’s advantages while mitigating potential pitfalls:

  1. Accessibility: Restricting anonymous access to intelligent systems and maintaining a transparent history of generated content can counteract misuse. Implementing a structured reporting mechanism, with AI-based initial verification followed by human validation when necessary, ensures responsible utilization (a sketch of such a flow follows this list).
  2. Regulations: Drawing inspiration from the European Union’s (EU) endeavors to label AI-generated content, regulatory frameworks can empower users to swiftly identify AI-derived content. Licensing AI-related activities, reminiscent of dual-use technologies, can introduce control and oversight to minimize harm.
  3. Education: Nurturing awareness about detecting, validating, and reporting AI-generated content is imperative. Educating individuals, spanning from students to software developers, about responsible AI usage can curtail potential abuse.
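To make the first guideline concrete, here is a hypothetical sketch of a reporting flow in which an AI-based first pass screens submissions and escalates uncertain cases to a human reviewer. The function names, markers, and thresholds are illustrative assumptions, not a design presented at the conference:

```python
# Hypothetical triage flow: AI-based initial verification, with human
# validation only when the automated screen is uncertain. All names,
# markers, and thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Report:
    content: str  # user-submitted content suspected of AI-driven abuse

def ai_screen(report: Report) -> float:
    """Placeholder classifier: returns a confidence score in [0, 1]
    that the reported content is abusive AI-generated material."""
    suspicious_markers = ("as an ai language model", "generated by")
    hits = sum(marker in report.content.lower() for marker in suspicious_markers)
    return min(1.0, 0.4 * hits)

def triage(report: Report, auto_threshold: float = 0.8) -> str:
    score = ai_screen(report)
    if score >= auto_threshold:
        return "auto-flagged"             # high confidence: act immediately
    if score > 0.0:
        return "queued for human review"  # uncertain: human validation
    return "dismissed"                    # no signal found

print(triage(Report("This text was generated by a chatbot...")))
# -> "queued for human review"
```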

As Kamluk phrases it, AI embodies duality: a potent tool capable of mimicking human creation. This duality, while holding transformative potential, also harbors risks. The ascent of generative AI underscores the urgency of formulating secure directives that balance innovation with ethics.

Continuing its commitment to shaping cybersecurity discourse, Kaspersky will foster these dialogues at the Kaspersky Security Analyst Summit (SAS) 2023, scheduled from October 25th to 28th in Phuket, Thailand. This event, uniting distinguished anti-malware researchers, global law enforcement agencies, Computer Emergency Response Teams, and senior executives across sectors, remains a pivotal platform for navigating the evolving cybersecurity landscape.
