The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK that's associated with the APT28 hacking group, alongside its YARA rules, asking the model to modify the source code to sidestep detection in such a way that the original functionality remained intact and the generated source code was syntactically free of errors.
Armed with this feedback mechanism, the altered malware generated by the LLM was able to evade detection by simple string-based YARA rules.
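For context, a string-based YARA rule matches fixed byte or text sequences in a file, which is what makes it brittle against this kind of rewriting. A minimal sketch of such a rule follows; the rule name and strings are invented for illustration and do not reflect the actual STEELHOOK signatures:

```yara
rule Hypothetical_String_Match
{
    meta:
        description = "Illustrative string-based rule; all identifiers are hypothetical"
    strings:
        $s1 = "example_c2_path" ascii     // hypothetical plaintext marker
        $s2 = "example_mutex_name" wide   // hypothetical UTF-16 marker
    condition:
        any of them
}
```

Because a rule like this fires only on exact byte sequences, a model that renames identifiers or re-encodes string literals while preserving behavior can break the match without altering what the malware does.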
There are limitations to this approach, the most prominent being the limited amount of text a model can process as input at one time (its context window), which makes it difficult to operate on larger codebases.
Besides modifying malware to fly under the radar, such AI tools could also be used to create deepfakes impersonating senior executives and leaders, and to conduct influence operations that mimic legitimate websites at scale.
Furthermore, generative AI is expected to expedite threat actors' ability to carry out reconnaissance of critical infrastructure facilities and glean information that could be of strategic use in follow-on attacks.
"By leveraging multimodal models, public images and videos of ICS and manufacturing equipment, in addition to aerial imagery, can be parsed and enriched to find additional metadata such as geolocation, equipment manufacturers, models, and software versioning," the company said.
Indeed, Microsoft and OpenAI warned last month that APT28 used LLMs to "understand satellite communication protocols, radar imaging technologies, and specific technical parameters," indicating efforts to "acquire in-depth knowledge of satellite capabilities."