Large model security
-
AI Security: Building an Enterprise AI Security System Based on ATT&CK Methodology
Taking the AI security threat matrix as its core framework and building on the mature ATT&CK methodology, this paper systematically elaborates the full-lifecycle security threats facing AI systems, covering key attack techniques such as data poisoning, model extraction, privacy leakage, adversarial examples, and prompt injection. It then proposes corresponding defense strategies and enterprise deployment solutions, providing a professional technical reference for AI engineers, security engineers, and CSOs.
-
Large model security: an introduction and analysis of OpenGuardrails, an open source guardrail framework
OpenGuardrails is the first complete open source, enterprise-grade security guardrail platform for large models, supporting 119 languages, a unified LLM architecture, configurable sensitivity policies, and multi-cloud deployment. This report analyzes its core technical innovations, application scenarios, deployment models, performance benchmarks, and future development in depth, offering security and compliance guidance for AI applications in regulated industries such as finance, healthcare, and law. By examining OpenGuardrails' configurable policies, efficient model design, and production-grade infrastructure, it points to the direction of next-generation AI security guardrails.
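To make "configurable sensitivity policies" concrete, the sketch below shows how a guardrail might gate text against per-category thresholds. The category names, scoring heuristic, and API are illustrative assumptions for this digest, not OpenGuardrails' actual interface; a real deployment would replace the keyword heuristic with a classifier model.

```python
# Hypothetical sketch of a configurable-sensitivity guardrail check.
# Category names, scores, and thresholds are illustrative only.
from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    # Per-category blocking thresholds in [0, 1]; lower = stricter.
    thresholds: dict = field(default_factory=lambda: {
        "prompt_injection": 0.5,
        "pii_leak": 0.3,
    })

def score_text(text: str) -> dict:
    # Stand-in for a real classifier: naive keyword heuristics.
    lowered = text.lower()
    return {
        "prompt_injection": 0.9 if "ignore previous instructions" in lowered else 0.0,
        "pii_leak": 0.8 if "ssn" in lowered else 0.0,
    }

def check(text: str, policy: GuardrailPolicy) -> tuple:
    """Return (allowed, violated_categories) under the given policy."""
    scores = score_text(text)
    violations = [c for c, s in scores.items() if s >= policy.thresholds[c]]
    return (not violations, violations)
```

A regulated-industry tenant could then tighten only the categories it cares about (e.g. lower the `pii_leak` threshold) without touching the scoring model, which is the main operational appeal of policy/model separation.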
-
CSO: A Chief Security Officer's Guide to Full-Link Security for Artificial Intelligence Data
Chief Security Officers (CSOs) are facing an unprecedented challenge: AI systems are both amplifying existing data risks and introducing entirely new threats such as data poisoning, model reverse engineering, and supply chain contamination. This guide builds on the NIST AI Risk Management Framework (AI RMF), the Google Secure AI Framework (SAIF), and industry practices to provide CSOs with an actionable data security governance system.
-
CVE-2025-68664: Serialization Injection Vulnerability Report for LangChain, an Open Source Large Model Framework
LangChain, an open source large model framework, has disclosed a high-severity serialization injection vulnerability (CVE-2025-68664). The flaw, discovered by Yarden Porat, a security researcher at Cyata Security, stems from a missing "lc" key check in the serialization/deserialization process, which allows an attacker to leak environment variables, instantiate arbitrary objects, or even achieve remote code execution via prompt injection. The vulnerability affects all LangChain Core deployments before version 0.3.81 and within the 1.0.0-1.2.5 range. The maintainers released patched versions 1.2.5 and 0.3.81 on December 24 and tightened the default security policy at the same time.
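The class of bug described above can be sketched generically: a deserializer that only validates payloads carrying a type-tag key can be bypassed by payloads that omit the key. The snippet below is a simplified illustration of that pattern, not LangChain's actual code; the "lc" key name mirrors the advisory, while `ALLOWED_TYPES` and the payload shape are assumptions for the example.

```python
# Illustrative sketch of a serialization-injection bug class:
# validation gated on the presence of a type-tag key ("lc") can be
# skipped by payloads that simply omit the key.

ALLOWED_TYPES = {"prompt_template"}

def unsafe_deserialize(payload: dict) -> dict:
    if payload.get("lc") is not None:
        # Tagged payloads go through an allow-list check...
        if payload.get("type") not in ALLOWED_TYPES:
            raise ValueError("type not allowed")
        return {"kind": payload["type"], "data": payload.get("data")}
    # ...but untagged payloads skip validation entirely: an attacker
    # who can smuggle such a dict into serialized state (e.g. via
    # prompt injection) controls what gets instantiated downstream.
    return {"kind": payload.get("type"), "data": payload.get("data")}

def safe_deserialize(payload: dict) -> dict:
    # Hardened variant: validate every payload, tagged or not.
    if payload.get("type") not in ALLOWED_TYPES:
        raise ValueError("type not allowed")
    return {"kind": payload["type"], "data": payload.get("data")}
```

The fix pattern, validating unconditionally rather than only on the happy path, is the same defensive move the patched LangChain releases describe as tightening the default security policy.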
-
Artificial Intelligence Security Defense in Depth: Explanation of Google SAIF AI Security Framework
With the widespread adoption of Large Language Models (LLMs) and Generative AI (GenAI) in enterprise applications, the traditional software security paradigm based on deterministic logic struggles to cope with new stochastic threats such as model inversion, data poisoning, and prompt injection. Google's Secure AI Framework (SAIF), launched in 2023, proposes a systematic defense architecture that combines traditional cybersecurity best practices with the specific characteristics of AI systems. This paper analyzes SAIF's six core pillars, ecosystem collaboration mechanisms, and evolution path from the perspective of architectural design, providing theoretical and practical references for building enterprise-grade AI security systems.
-
Artificial Intelligence (AI) Big Model Security Risks and Defense In-Depth Report
This report is organized around the five core attack surfaces formed by the critical links of an AI system (AI Assistants, Agents, Tools, Models, and Storage), presenting the targeted security risks, defense architectures, and solutions for each.