A newly discovered critical vulnerability in PyTorch, a widely used open-source machine learning framework, has put developers, researchers, and organizations at serious risk. Tracked as CVE-2025-32434, the flaw is rated Critical with a CVSS 4.0 score of 9.3, and it can be exploited without requiring user interaction or privileges.

The vulnerability allows Remote Code Execution (RCE) under specific conditions, making it an urgent issue for anyone working with PyTorch-based models. Below, we break down what the vulnerability entails, who is at risk, how to mitigate it, and why it matters for the broader AI and cybersecurity community.


What is CVE-2025-32434?

CVE 2025 32434 is a deserialization vulnerability found in the torch.load() function of the PyTorch framework. This function is widely used to load serialized AI models for inference and further training. Typically, developers use the weights_only=True parameter to prevent potentially harmful code from being executed. However, this safeguard is no longer reliable.
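
For context, a typical loading pattern looks like the minimal sketch below; the checkpoint path and the model are placeholders for illustration:

    import torch
    import torch.nn as nn

    # Placeholder model and checkpoint path, for illustration only.
    model = nn.Linear(10, 2)

    # The pattern long documented as the safe way to load checkpoints:
    # weights_only=True is meant to restrict deserialization to tensors and
    # primitive containers such as dicts and lists.
    state_dict = torch.load("model.pt", map_location="cpu", weights_only=True)
    model.load_state_dict(state_dict)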

According to the official GitHub security advisory, an attacker can bypass this protection by crafting a malicious model file that, when loaded, executes arbitrary code on the victim's machine, even when weights_only=True is set. The root cause is an implementation flaw in the deserialization mechanism.

References:

  • NVD – CVE-2025-32434
  • GitHub Security Advisory: torch.load with weights_only=True RCE

Why This is a Serious Threat

The vulnerability affects all PyTorch versions up to and including 2.5.1. The fix ships in version 2.6.0, which is already available from PyPI via pip.

The implications of this flaw are vast:

  • It enables remote attackers to take control of any system that loads an attacker-supplied model file with an unpatched PyTorch version.
  • The vulnerability can be exploited via tampered AI model files, which can be uploaded to public repositories or injected into software supply chains.
  • It requires no user privileges, no authentication, and no user interaction, making it highly dangerous in real-world settings.

The CVSS vector string confirms its severity:
CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

This translates to:

  • AV:N – Attack is possible over the network
  • AC:L – Attack complexity is low
  • AT:N – No special attack requirements or preconditions
  • PR:N – No privileges required
  • UI:N – No user interaction required
  • VC:H / VI:H / VA:H – High impact on confidentiality, integrity, and availability

How the Exploit Works

The vulnerability exploits a subtle flaw in PyTorch’s deserialization logic. When torch.load() is called with the weights_only=True argument, it is supposed to restrict deserialization to plain data such as tensors, dictionaries, and lists, and to refuse to reconstruct arbitrary Python objects.
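
To make the intended behavior concrete, here is a minimal sketch of the restriction working as documented on a patched build; the class name and file path are illustrative:

    import pickle
    import torch

    # A harmless class that is not on the weights_only allowlist.
    class NotATensor:
        pass

    # torch.save will happily pickle arbitrary Python objects...
    torch.save({"payload": NotATensor()}, "demo.pt")

    # ...but torch.load with weights_only=True is supposed to refuse to
    # reconstruct them, raising UnpicklingError instead of running code.
    try:
        torch.load("demo.pt", weights_only=True)
    except pickle.UnpicklingError as exc:
        print("Blocked as expected:", exc)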

However, the researcher Ji’an Zhou discovered that a model file can be manipulated in such a way that weights_only=True fails to block the execution of malicious code. In effect, the parameter intended to safeguard systems gives a false sense of security: the crafted file still achieves code execution.

This revelation directly contradicts PyTorch’s documentation, which previously endorsed weights_only=True as a best practice for secure model loading.


Who Is at Risk?

Any organization, developer, or researcher who uses the PyTorch framework and loads AI models via torch.load() is at risk—especially if they rely on third-party or publicly available models.

Environments that are particularly vulnerable include:

  • Cloud-based machine learning systems
  • Edge devices using PyTorch inference
  • Federated learning frameworks
  • Model hub integrations (e.g., Hugging Face, private model zoos)
  • AI software pipelines using automated model ingestion

Because the attack can be executed with low complexity and no prerequisites, attackers can automate the exploit, injecting malicious models into popular repositories and compromising systems at scale.


Immediate Security Recommendations

The PyTorch development team has released version 2.6.0, which contains the fix for CVE-2025-32434. Users should take the following actions immediately to stay protected:

1. Upgrade to PyTorch 2.6.0

Use the following pip command to upgrade:

pip install --upgrade torch
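
After upgrading, a quick sanity check confirms the running environment actually picked up a patched build (sketch below; the packaging library is a separate but very common dependency):

    import torch
    from packaging.version import Version  # pip install packaging, if missing

    v = Version(torch.__version__)
    print("PyTorch", v)
    if v < Version("2.6.0"):
        print("Still vulnerable to CVE-2025-32434 - upgrade before loading models.")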

2. Audit Existing Models

Inspect all models that have been downloaded or shared—particularly from public repositories. Validate their sources and integrity. Treat any unfamiliar or community-shared model with suspicion until verified.
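
One way to validate integrity is to compare each file's SHA-256 digest against a value published by a trusted source, as in the sketch below; the path and expected digest are placeholders:

    import hashlib

    # Stream the file through SHA-256 so large checkpoints hash cheaply.
    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk_size):
                digest.update(block)
        return digest.hexdigest()

    # Placeholder digest; in practice it would come from a signed manifest
    # or the publisher's release notes, fetched over a trusted channel.
    expected = "..."
    if sha256_of("model.pt") != expected:
        raise RuntimeError("model.pt does not match its published checksum")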

3. Do Not Rely Solely on weights_only=True

Until you are running a patched version, do not treat the weights_only=True flag as a security boundary on its own. Consider additional layers for secure model loading, such as:

  • Using trusted internal model repositories
  • Loading models in sandboxed environments
  • Using custom deserializers with strict parsing (one illustration follows this list)
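
As one concrete illustration of the last point (an option this article does not claim the advisory prescribes), the safetensors format stores raw tensor bytes plus a JSON header and never touches pickle, so loading it cannot execute code:

    # Requires the separate safetensors package: pip install safetensors
    import torch
    from safetensors.torch import save_file, load_file

    save_file({"weight": torch.zeros(2, 2)}, "model.safetensors")

    # load_file parses tensor metadata and raw bytes only; there is no
    # pickle stream, hence no object reconstruction and no code execution.
    tensors = load_file("model.safetensors")
    print(tensors["weight"].shape)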

4. Monitor Official Channels

Keep track of updates from PyTorch’s security page and its GitHub security advisories:

torch.load with weights_only=True RCE

Broader Implications for AI Security

This vulnerability is a wake-up call for the artificial intelligence and cybersecurity communities. It underscores a fundamental reality: machine learning frameworks are software, and all software can have vulnerabilities.

The CVE-2025-32434 case highlights several important issues:

  • Deserialization attacks are not limited to traditional web applications. They are a threat in machine learning pipelines as well.
  • Security assumptions in documentation can become outdated. PyTorch’s official advice to use weights_only=True became a liability.
  • Supply chain attacks in the AI space are becoming more plausible, especially when model sharing is encouraged across platforms.

Final Thoughts

The discovery of CVE-2025-32434 demonstrates how even mature and widely trusted frameworks like PyTorch can become vulnerable in unexpected ways. It also reflects the evolving threat landscape in machine learning, where attackers are beginning to target complex model-loading mechanisms.

For developers and teams in the AI space, security should be an integral part of the development and deployment pipeline. Merely trusting built-in safeguards is no longer sufficient.

Upgrading immediately to PyTorch 2.6.0, treating torch.load() on untrusted files as dangerous, and remaining vigilant about model provenance are the key steps to staying secure.

