Two cybersecurity firms have tested OpenAI’s newly released GPT-5, revealing that the AI model remains vulnerable to manipulation despite its advanced safety features. According to separate analyses by NeuralTrust and SPLX, specialised techniques can bypass GPT-5’s guardrails, raising concerns about its readiness for high-stakes enterprise use.