ETFOptimize | High-performance ETF-based Investment Strategies

Quantitative strategies, Wall Street-caliber research, and insightful market analysis since 1998.


ETFOptimize | HOME
Close Window

Pillar Security Uncovers Novel Attack Vector That Embeds Malicious Backdoors in Model Files on Hugging Face

TEL EVIV, Israel, July 09, 2025 (GLOBE NEWSWIRE) -- Pillar Security, a leading company in AI security, discovered a novel supply chain attack vector that targets the AI inference pipeline. This novel technique, termed "Poisoned GGUF Templates," allows attackers to embed malicious instructions that are processed alongside legitimate inputs, compromising AI outputs.

The vulnerability affects the widely used GGUF (GPT-Generated Unified Format), a standard for AI deployment with over 1.5 million files distributed on public platforms like Hugging Face. By manipulating these templates, which define the conversational structure for an LLM, attackers can create a persistent compromise that affects every user interaction while remaining invisible to both users and security systems.

“We’re still in the early days of understanding the full range of AI supply chain security considerations,” said Ziv Karliner, CTO and Co-founder of Pillar Security. “Our research shows how the trust that powers platforms and open-source communities—while essential to AI progress—can also open the door to deeply embedded threats. As the AI ecosystem matures, we must rethink how AI assets are vetted, shared, and secured.”

Malicious GGUF Templates - Attack Surface - by Pillar Security

How the "Poisoned GGUF Template" Attack Works

This attack vector exploits the trust placed in community-sourced AI models and the platforms that host them. The mechanism allows for a stealthy, persistent compromise:

  • Attackers embed malicious, conditional instructions directly within a GGUF file’s chat template, a component that formats conversations for the AI model.
  • The poisoned model is uploaded to a public repository. Attackers can exploit the platform’s UI to display a clean template online while the actual downloaded file contains the malicious version, bypassing standard reviews.
  • The malicious instructions lie dormant until specific user prompts trigger them, at which point the model generates a compromised output.

"What makes this attack so effective is the disconnect between what's shown in the repository interface and what's actually running on users’ machines," added Pillar’s Ariel Fogel, who led the research. "It remains undetected by casual testing and most security tools."

The AI Inference Pipeline: A New Attack Surface

The “Poisoned GGUF Templates” attack targets a critical blind spot in current AI security architectures. Most security solutions focus on validating user inputs and filtering model outputs, but this attack occurs in the unmonitored space between them.

Because the malicious instructions are processed within the trusted inference environment, the attack evades existing defenses like system prompts and runtime monitoring. An attacker no longer needs to bypass the front door with a clever prompt; they can build a backdoor directly into the model file. This capability redefines the AI supply chain as a primary vector for compromise, where a single poisoned model can be integrated into thousands of downstream applications.

Responsible Disclosure

Pillar Security followed a responsible disclosure process, sharing its findings with vendors, including Hugging Face and LM Studio, in June 2025. The responses indicated that the platforms do not currently classify this as a direct platform vulnerability, placing the responsibility of vetting models on users. This stance highlights a significant accountability gap in the AI ecosystem.

Mitigation Strategies

The primary defense against this attack vector is the direct inspection of GGUF files to identify chat templates containing uncommon or non-standard instructions. Security teams should immediately:

  • Audit GGUF Files: Deploy practical inspection techniques to examine GGUF files for suspicious template patterns. Look for unexpected conditional logic (if/else statements), hidden instructions, or other manipulations that deviate from standard chat formats.
  • Move Beyond Prompt-Based Controls: This attack fundamentally challenges current AI security assumptions. Organizations must evolve beyond a reliance on system prompts and input/output filtering toward comprehensive template and processing pipeline security.
  • Implement Provenance and Signing: A critical long-term strategy is to establish model provenance. This can include developing template allowlisting systems to ensure only verified templates are used in production.

The Pillar platform discovers and flags malicious GGUF files and other types of risks in the template layer.

Read the full report: https://www.pillar.security/blog/llm-backdoors-at-the-inference-level-the-threat-of-poisoned-templates

About Pillar Security

Pillar Security is a leading AI-security platform, providing companies full visibility and control to build and run secure AI systems. Founded by experts in offensive and defensive cybersecurity, Pillar secures the entire AI lifecycle - from development to deployment - through AI Discovery, AI Security Posture Management (AI-SPM), AI Red Teaming, and Adaptive Runtime Guardrails. Pillar empowers organizations to prevent data leakage, neutralize AI-specific threats, and comply with evolving regulations.

Contact person:

Hadar Yakir
info@pillar.security

A photo accompanying this announcement is available at https://www.globenewswire.com/NewsRoom/AttachmentNg/d767a026-13f9-419d-827f-7ade3b92a24d


Primary Logo

Stock Quote API & Stock News API supplied by www.cloudquote.io
Quotes delayed at least 20 minutes.
By accessing this page, you agree to the following
Privacy Policy and Terms Of Service.


 

IntelligentValue Home
Close Window

DISCLAIMER

All content herein is issued solely for informational purposes and is not to be construed as an offer to sell or the solicitation of an offer to buy, nor should it be interpreted as a recommendation to buy, hold or sell (short or otherwise) any security.  All opinions, analyses, and information included herein are based on sources believed to be reliable, but no representation or warranty of any kind, expressed or implied, is made including but not limited to any representation or warranty concerning accuracy, completeness, correctness, timeliness or appropriateness. We undertake no obligation to update such opinions, analysis or information. You should independently verify all information contained on this website. Some information is based on analysis of past performance or hypothetical performance results, which have inherent limitations. We make no representation that any particular equity or strategy will or is likely to achieve profits or losses similar to those shown. Shareholders, employees, writers, contractors, and affiliates associated with ETFOptimize.com may have ownership positions in the securities that are mentioned. If you are not sure if ETFs, algorithmic investing, or a particular investment is right for you, you are urged to consult with a Registered Investment Advisor (RIA). Neither this website nor anyone associated with producing its content are Registered Investment Advisors, and no attempt is made herein to substitute for personalized, professional investment advice. Neither ETFOptimize.com, Global Alpha Investments, Inc., nor its employees, service providers, associates, or affiliates are responsible for any investment losses you may incur as a result of using the information provided herein. Remember that past investment returns may not be indicative of future returns.

Copyright © 1998-2017 ETFOptimize.com, a publication of Optimized Investments, Inc. All rights reserved.