Veracode, a leader in application risk management, has released its 2025 GenAI Code Security Report, highlighting significant security issues in AI-generated code. The study examined 80 coding tasks across more than 100 large language models and found that AI-generated code introduced security vulnerabilities in 45% of cases. While models have grown steadily better at producing syntactically correct code, their security performance has stagnated.
GenAI models frequently default to insecure coding patterns, according to Jens Wessling, CTO at Veracode. Wessling raised concerns about the rise of “vibe coding,” in which developers rely on AI to generate code without specifying security requirements, effectively delegating critical security decisions to the model and often introducing vulnerabilities as a result. AI also benefits attackers, who can use it to identify and exploit vulnerabilities far faster than traditional security measures were designed to handle.
The report found Java to be particularly risky for AI code generation, with a security failure rate exceeding 70%. Python, C#, and JavaScript also showed significant failure rates. LLMs struggled most with specific flaw classes such as cross-site scripting (XSS) and log injection, producing insecure code in a majority of cases.
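To make the log injection failure mode concrete, here is a minimal Java sketch of the pattern in question. The report does not publish the prompts or generated code it tested, so the class, method names, and logging setup below are illustrative assumptions, not code from the study; they show the well-known flaw (CWE-117) where unsanitized user input lets an attacker forge log entries, alongside a common mitigation.

```java
import java.util.logging.Logger;

public class LoginController {
    private static final Logger LOG = Logger.getLogger(LoginController.class.getName());

    // Vulnerable: user-controlled input flows into the log unmodified,
    // so an attacker can embed newlines and forge extra log records.
    public void loginInsecure(String username) {
        LOG.info("Login attempt for user: " + username);
    }

    // Hardened: neutralize CR/LF before logging so injected content
    // cannot masquerade as a separate log entry.
    public void loginSecure(String username) {
        String sanitized = username.replaceAll("[\r\n]", "_");
        LOG.info("Login attempt for user: " + sanitized);
    }

    public static void main(String[] args) {
        LoginController controller = new LoginController();
        // Simulated attacker input containing a forged log line.
        String payload = "alice\nINFO: Login attempt for user: admin (success)";
        controller.loginInsecure(payload); // forged entry appears as its own record
        controller.loginSecure(payload);   // newlines replaced; one record per attempt
    }
}
```

The fix is a one-line sanitization step, which is precisely the kind of security decision the report suggests models routinely omit when prompts do not ask for it.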
To mitigate these risks, Veracode recommends that organizations adopt comprehensive application risk management programs, built around tools that detect and fix vulnerabilities early in the development process. The company stresses that security practices must evolve alongside AI capabilities to keep teams from accumulating security debt.