Is There an Increased Risk of Cyber Attack with ChatGPT? – BlackMamba ChatGPT Polymorphic Malware
There is an abundance of cyber security companies that have been using Artificial Intelligence (AI), Machine Learning (ML) and Large Language Models (LLMs) for a wide variety of purposes since the technologies' inception. Recent proof-of-concept (PoC) attacks, such as BlackMamba, which uses generative AI to create adaptive malware, have raised questions about the effectiveness of many current security solutions. Such attacks have also fuelled wider concerns about whether AI technology itself poses a threat to the cyber security landscape.
BlackMamba is a PoC malware that retrieves polymorphic code from a benign remote source using generative AI. It then executes the malicious code in memory using Python's exec() function, so the payload never touches the disk. BlackMamba's creators claim that existing Endpoint Detection and Response (EDR) solutions cannot detect it. However, such tactics have been well known in the cyber security community for years, and modern security vendors have the visibility needed to identify and prevent these attacks by monitoring malware behaviour rather than relying solely on file signatures.
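To illustrate the in-memory execution technique described above, here is a minimal and entirely benign sketch: a code string is produced at runtime (in a real BlackMamba-style attack it would come back from a generative-AI API call and differ on every run) and is run with Python's exec(), so it never exists as a file on disk. The function name and the hard-coded string are purely illustrative assumptions, not code from BlackMamba itself.

```python
# Benign sketch of dynamic in-memory code execution via exec().
# Nothing here is malicious; the point is only that the executed
# code never exists as a file on disk, which is why file-signature
# scanning alone cannot see it.

def run_in_memory(code_string: str) -> dict:
    """Execute a dynamically supplied code string entirely in memory."""
    namespace = {}                 # isolated namespace for the executed code
    exec(code_string, namespace)   # compiled and run in memory, never written to disk
    return namespace               # expose whatever names the code defined

# In a BlackMamba-style attack this string would be fetched from a
# generative-AI API at runtime and vary per execution (polymorphism);
# here it is hard-coded for illustration.
generated = "def greet():\n    return 'hello from in-memory code'"

ns = run_in_memory(generated)
print(ns["greet"]())  # -> hello from in-memory code
```

Because the payload only ever exists as a string in process memory, defences that watch process behaviour (unexpected network calls, keystroke hooks, use of exec on downloaded data) are far better placed to catch it than defences that scan files.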
While attacks like BlackMamba may be alarming, AI is neither inherently good nor evil. As with any other technology, it is the people who use it that can make it dangerous. The popular media often portrays AI as a monster that will soon turn against its creators, much like the Cylons (Battlestar Galactica's AI robotic adversaries). In reality, AI has limitations, and there are concerns about the quality and diversity of the datasets used to train AI models. For example, at the time of writing this article, OpenAI's ChatGPT Large Language Model is only trained on data available up to 2021, meaning its responses can be outdated. Fundamental to understanding AI's limitations is recognising that AI can be fooled by sophisticated techniques such as adversarial attacks, that it can provide incorrect information if its training data is outdated, and that it cannot make judgment calls.
In conclusion, AI is not a magical technology that can create its own malware to wreak havoc on your business. However, AI tools can be used to build a comprehensive security strategy that should include other security technologies, paired with human intelligence. Understanding AI’s capabilities and limitations is essential to developing effective security solutions that can adapt to the ever-evolving threat landscape.
To learn more about the cyber security products and services that One2Call could offer your business, check out our website or get in touch by clicking the link below.