New EchoGram Trick Makes AI Models Accept Dangerous Inputs

Security researchers at HiddenLayer have uncovered a critical vulnerability that exposes fundamental weaknesses in the guardrails protecting today’s most powerful artificial intelligence models. The newly discovered EchoGram attack technique demonstrates how defensive systems safeguarding AI giants like GPT-4, Claude, and Gemini can be systematically manipulated to either approve malicious content or generate false security alerts. […]

The post New EchoGram Trick Makes AI Models Accept Dangerous Inputs appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.