{"dataType":"CVE_RECORD","dataVersion":"5.1","cveMetadata":{"cveId":"CVE-2025-49847","assignerOrgId":"a0819718-46f1-4df5-94e2-005712e83aaa","state":"PUBLISHED","assignerShortName":"GitHub_M","dateReserved":"2025-06-11T14:33:57.800Z","datePublished":"2025-06-17T20:04:40.893Z","dateUpdated":"2025-06-18T13:41:11.407Z"},"containers":{"cna":{"title":"llama.cpp Vulnerable to Buffer Overflow via Malicious GGUF Model","problemTypes":[{"descriptions":[{"cweId":"CWE-119","lang":"en","description":"CWE-119: Improper Restriction of Operations within the Bounds of a Memory Buffer","type":"CWE"}]},{"descriptions":[{"cweId":"CWE-195","lang":"en","description":"CWE-195: Signed to Unsigned Conversion Error","type":"CWE"}]}],"metrics":[{"cvssV3_1":{"attackComplexity":"LOW","attackVector":"NETWORK","availabilityImpact":"HIGH","baseScore":8.8,"baseSeverity":"HIGH","confidentialityImpact":"HIGH","integrityImpact":"HIGH","privilegesRequired":"NONE","scope":"UNCHANGED","userInteraction":"REQUIRED","vectorString":"CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H","version":"3.1"}}],"references":[{"name":"https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8wwf-w4qm-gpqr","tags":["x_refsource_CONFIRM"],"url":"https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-8wwf-w4qm-gpqr"},{"name":"https://github.com/ggml-org/llama.cpp/commit/3cfbbdb44e08fd19429fed6cc85b982a91f0efd5","tags":["x_refsource_MISC"],"url":"https://github.com/ggml-org/llama.cpp/commit/3cfbbdb44e08fd19429fed6cc85b982a91f0efd5"}],"affected":[{"vendor":"ggml-org","product":"llama.cpp","versions":[{"version":"< b5662","status":"affected"}]}],"providerMetadata":{"orgId":"a0819718-46f1-4df5-94e2-005712e83aaa","shortName":"GitHub_M","dateUpdated":"2025-06-17T20:04:40.893Z"},"descriptions":[{"lang":"en","value":"llama.cpp is a C/C++ implementation of inference for several LLM models. Prior to version b5662, an attacker-supplied GGUF model vocabulary can trigger a buffer overflow in llama.cpp's vocabulary-loading code.\n\nSpecifically, the helper _try_copy in llama.cpp/src/vocab.cpp: llama_vocab::impl::token_to_piece() casts a very large size_t token length into an int32_t, causing the length check (if (length < (int32_t)size)) to be bypassed. As a result, memcpy is still called with that oversized size, letting a malicious model overwrite memory beyond the intended buffer. This can lead to arbitrary memory corruption and potential code execution. This issue has been patched in version b5662."}],"source":{"advisory":"GHSA-8wwf-w4qm-gpqr","discovery":"UNKNOWN"}},"adp":[{"metrics":[{"other":{"type":"ssvc","content":{"timestamp":"2025-06-18T13:40:43.172535Z","id":"CVE-2025-49847","options":[{"Exploitation":"poc"},{"Automatable":"no"},{"Technical Impact":"total"}],"role":"CISA Coordinator","version":"2.0.3"}}}],"title":"CISA ADP Vulnrichment","providerMetadata":{"orgId":"134c704f-9b21-4f2e-91b3-4a467353bcc0","shortName":"CISA-ADP","dateUpdated":"2025-06-18T13:41:11.407Z"}}]}}