A now-patched vulnerability in OpenAI's ChatGPT app for macOS could have allowed attackers to plant persistent spyware into the chatbot's memory, The Hacker News reports.
The technique, dubbed "SpAIware," could be abused to continuously exfiltrate any information the targeted user typed into ChatGPT, according to cybersecurity researcher Johann Rehberger.
"This attack chain was quite interesting to put together, and demonstrates the dangers of having long-term memory being automatically added to a system, both from a misinformation/scam point of view, but also regarding continuous communication with attacker-controlled servers," said Rehberger, who urged users of the ChatGPT AI chatbot to consistently evaluate memories stored by the system.
The findings follow a study detailing MathPrompt, a new AI jailbreaking method that circumvents large language models' safety mechanisms by abusing their symbolic mathematics capabilities.