AI Libraries at Risk: Uncovering Hidden Vulnerabilities in Popular Tools
Imagine a world where the very tools powering AI innovation could be turned against us. This isn't science fiction; it's a reality we've uncovered. Our team has identified critical vulnerabilities in three widely used open-source AI/ML Python libraries maintained by Apple, Salesforce, and NVIDIA. These flaws could allow remote code execution (RCE) when loading seemingly innocent model files, and the affected libraries back popular models on Hugging Face with tens of millions of downloads collectively.
The Culprits:
- NeMo (NVIDIA): A powerful framework for building diverse AI models, hiding a vulnerability that's been lurking since at least 2020.
- Uni2TS (Salesforce): The library behind Moirai, Salesforce's time series forecasting model, with a flaw in a package downloaded hundreds of thousands of times.
- FlexTok (Apple & EPFL VILAB): A tool for image processing, with a vulnerability that, while less widespread, still poses a significant risk.
The Root Cause:
These libraries rely on metadata to configure complex models. A shared third-party dependency, Hydra, instantiates classes from this metadata: the configuration names a target class, and Hydra imports and calls it. The vulnerable versions place no restriction on which target may be named, so an attacker can embed a malicious target (and its arguments) in model metadata. When such a model is loaded, the attacker's code executes automatically.
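The dangerous pattern can be sketched with the standard library's `importlib` (a simplified stand-in for Hydra's `instantiate`; the function name and config keys below mirror Hydra's conventions but are illustrative, not its actual implementation):

```python
import importlib

def instantiate_from_config(config: dict):
    """Resolve a dotted path from untrusted metadata and call it --
    the same shape as instantiating a class named in a model config."""
    module_path, _, attr_name = config["_target_"].rpartition(".")
    target = getattr(importlib.import_module(module_path), attr_name)
    args = config.get("_args_", [])
    kwargs = {k: v for k, v in config.items() if not k.startswith("_")}
    return target(*args, **kwargs)

# Benign use: the metadata names a class the library expects.
benign = {"_target_": "collections.Counter", "_args_": ["aabbc"]}
print(instantiate_from_config(benign))  # Counter({'a': 2, 'b': 2, 'c': 1})

# Malicious use: nothing restricts _target_ to model classes, so the
# same loader happily calls os.system (shown with a harmless command).
malicious = {"_target_": "os.system", "_args_": ["echo attacker code runs here"]}
instantiate_from_config(malicious)
```

The benign and malicious configs go through the identical code path; the only difference is the string an attacker writes into the model file.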
A Race Against Time:
We responsibly disclosed these vulnerabilities to the vendors in April 2025, giving them time to patch before public disclosure. Here's how they responded:
- NVIDIA: Released a fix in NeMo 2.3.2 (CVE-2025-23304).
- Salesforce: Deployed a patch on July 31, 2025 (CVE-2026-22584).
- Apple & EPFL VILAB: Updated FlexTok's code and documentation to mitigate the risk.
The Bigger Picture:
While these specific vulnerabilities are addressed, the underlying issue remains. The AI/ML ecosystem is rapidly evolving, with new libraries and formats constantly emerging. Each new tool introduces potential attack surfaces. Security researchers at JFrog have already demonstrated vulnerabilities in applications using these newer formats, exploiting techniques like XSS and path traversal.
A Call to Action:
This discovery highlights the critical need for robust security practices in AI development. We urge developers to:
- Scrutinize third-party libraries: Understand their security implications and potential vulnerabilities.
- Implement strict input validation: Treat model files and their metadata as untrusted input, and validate them before instantiating anything from them.
- Adopt secure coding practices: Follow best practices for secure software development.
- Stay informed: Keep up-to-date with the latest security advisories and patches.
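One concrete way to apply the input-validation advice above is to check the target named in metadata against an explicit allowlist before resolving it. A minimal sketch, assuming the instantiate-from-metadata pattern described earlier (the allowlist contents and function names here are illustrative):

```python
import importlib

# Hypothetical allowlist: only types the library itself ships should be
# instantiable from model metadata.
ALLOWED_TARGETS = {
    "collections.Counter",
    "collections.OrderedDict",
}

def safe_instantiate(config: dict):
    """Instantiate a target from metadata only if it is on the allowlist."""
    target_path = config["_target_"]
    if target_path not in ALLOWED_TARGETS:
        raise ValueError(f"refusing to instantiate untrusted target: {target_path}")
    module_path, _, attr_name = target_path.rpartition(".")
    target = getattr(importlib.import_module(module_path), attr_name)
    kwargs = {k: v for k, v in config.items() if not k.startswith("_")}
    return target(*config.get("_args_", []), **kwargs)

print(safe_instantiate({"_target_": "collections.Counter", "_args_": ["aab"]}))

# os.system is not on the allowlist, so this raises instead of executing:
try:
    safe_instantiate({"_target_": "os.system", "_args_": ["echo pwned"]})
except ValueError as e:
    print(e)
```

An allowlist is preferable to a blocklist here: new dangerous callables appear with every dependency added, but the set of classes a library legitimately needs to instantiate is small and known in advance.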
The Future of AI Security:
As AI becomes increasingly integrated into our lives, ensuring its security is paramount. We need a collaborative effort from developers, researchers, and vendors to build a more secure AI future. Let's not wait for a catastrophic breach to force our hand. The time to act is now.
What do you think? Are we doing enough to secure AI development? Share your thoughts in the comments below.