AI/ML, AI benefits/risks

Google Gemini for Workspace Vulnerable to Indirect Prompt Injection

Share
Today’s columnist Mohan Koo of DTEX Systems, offers six tips for developing a proactive AI program.,(Adobe Stock)

SC Media reported HiddenLayer researchers found Google's Gemini for Workspace could potentially be compromised by indirect prompt injection intrusions that involve malicious instruction injections on data sources to alter AI output.

Initial testing involved the delivery of emails with hidden instructions to targeted Gmail accounts that prompted Gemini to provide poems instead of a summary of the email, while another proof-of-concept akin to a phishing intrusion successfully prompted Gemini to display a password compromise warning that urged resets at the www[.]g00gle[.]com/reset site, according to the HiddenLayer report.

Additional "Rickroll" injections were also successfully executed by researchers through hidden injections in Google Slides and Google Drive. Such an issue has already been reported to Google, which dismissed it as intended behavior.

The findings come after an earlier HiddenLayer study noting jailbreaking and indirect prompt injection through a Gemini Advanced Google Workspace extension.

An In-Depth Guide to AI

Get essential knowledge and practical strategies to use AI to better your security program.

You can skip this ad in 5 seconds