SecAlliance report reveals the cybercrime potential of AI-driven chatbots


A report from SecAlliance, the cyber threat intelligence services provider, reveals the security risks posed by the rising number of AI-driven chatbots.
  
The report, Security and Innovation in the Age of the Chatbot, discusses the rising fears surrounding increased adoption of AI and the technological advancements that have led to a growing number of chatbots.


The sector analysis suggests current-generation generative AI tools enabled by large language models (LLMs), such as ChatGPT, BingBot and BardAI, have demonstrable applications in three distinct areas of concern to defenders: phishing campaign support, information operation enablement and malware development.
  
Since the launch of ChatGPT in November 2022, the generative pre-trained transformer (GPT) model has rapidly grown to a 100-million-strong user base, faster than any social media platform. With this exponential growth come fears of the technology being used to create malicious code, even by those with little to no coding skill or understanding.
  
SecAlliance suggests that current-generation LLM-enabled generative AI tools are likely to give lower-skilled threat actors the ability to generate low- to moderate-complexity malicious code without requiring significant programming experience or resources.
  
And while OpenAI (the research institute behind ChatGPT) ostensibly prohibits use of its tools for purposes that violate its content policy, many of the safeguards it has implemented to prevent misuse have been shown to be easily circumvented.
  
Certainly, since ChatGPT’s release, cybercriminal and ethical hacker interest in such generative AI tools has spiked. But given the technology’s current technical limitations, SecAlliance assesses that most high-impact malicious use cases for generative AI are unlikely to be leveraged at scale in the short to medium term.
  
Nicholas Vidal, Strategic Cyber Threat Intelligence Team Lead at SecAlliance, says: "While current LLM tools present considerable promise and considerable risk, our research shows that their broader security impacts remain muted by limitations in the underlying technology that enables their use. However, their pace of innovation is rapid, and future advancements are likely to expand the scope of possibilities for misuse."
  
Already, SecAlliance has noted that ChatGPT can generate “semi-reliable” text for tasks commonly associated with phishing campaigns and other inauthentic behaviour operations, with motivated users circumventing the language model’s content filtering mechanisms. Cybercriminals are already leveraging LLMs to generate highly convincing human-language output.
  
SecAlliance assesses that the use of generative AI to produce high-complexity malware (including polymorphic variants) is unlikely in the near term, due to quality control issues and the high threshold of programming ability required for successful campaign execution. A current limitation for persistent cybercriminals is the inability to validate code generated by LLMs, which remains a challenge for would-be polymorphic malware developers.
  
CyberArk researchers point out that this remains a key issue for such malware developers, who, they argue, must be skilled enough to validate all possible modulation scenarios to produce exploit code capable of being executed.

And the UK’s NCSC assesses that even those with significant ability can likely develop malicious code from scratch more efficiently than by iterating, validating and appending code produced by generative AI.
  
SecAlliance assesses the recently released GPT-4 is likely to be more reliable and capable of handling more nuanced instruction than its earlier generation counterparts, potentially further reducing barriers to its use by malicious actors.
  
In the longer term, given the rapid evolution of the technology, improvements in the sophistication of generative AI and the growing experience of threat actors in exploiting it for malicious purposes will ultimately expand its potential impact, requiring increased investment from defenders in AI-enabled detection and response capabilities.
  
According to a study conducted by BlackBerry in February 2023, approximately 50% of IT decision makers polled stated they expect a successful cyberattack leveraging ChatGPT to be reported within the year. Of the same group, over 80% stated they planned to invest in AI-driven cybersecurity products within two years.
  
Business leaders are increasingly viewing AI-enabled defences as a critical means of defending against modern-day attack techniques, including those leveraging novel applications of AI. As Jeff Sims, the researcher behind HYAS’s polymorphic keylogger, BlackMamba, suggests, organisations must not only “remain vigilant” and “keep their security measures up to date” but also “adapt to new threats that emerge by operationalising cutting-edge research being conducted in this space.”
  
In other words, they must learn to fight fire with fire.
