Large model security
-
Large Model Security: An Introduction to and Analysis of the Open Source Guardrails Framework OpenGuardrails
OpenGuardrails is the first fully open-source, enterprise-grade guardrail platform for large model security, supporting 119 languages, a unified LLM architecture, configurable sensitivity policies, and multi-cloud deployment. This report analyzes its core technical innovations, application scenarios, deployment models, performance benchmarks, and roadmap in depth, offering security and compliance guidance for AI applications in regulated industries such as finance, healthcare, and law. By examining OpenGuardrails' configurable policies, efficient model design, and production-grade infrastructure, it outlines where the next generation of AI security guardrails is headed.
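To make the "configurable sensitivity policy" idea concrete, here is a minimal sketch of how per-category thresholds and actions could be evaluated against detection scores. The policy schema, field names, and the `evaluate` function are illustrative assumptions for this digest, not the actual OpenGuardrails API.

```python
# Hypothetical sketch of a configurable-sensitivity guardrail check.
# The policy schema and function names are illustrative only and are
# NOT the OpenGuardrails API.
from dataclasses import dataclass

@dataclass
class GuardrailPolicy:
    category: str          # e.g. "prompt_injection", "pii", "toxicity"
    sensitivity: float     # 0.0 (permissive) .. 1.0 (strict), tenant-configurable
    action: str            # "block", "redact", or "log"

def evaluate(scores: dict, policies: list) -> list:
    """Compare per-category risk scores (from a detection model) against
    each policy's sensitivity threshold and return the triggered actions."""
    triggered = []
    for p in policies:
        # Higher sensitivity lowers the score needed to trigger the action.
        if scores.get(p.category, 0.0) >= 1.0 - p.sensitivity:
            triggered.append((p.category, p.action))
    return triggered

# Example: a strict prompt-injection policy and a lenient PII policy.
policies = [
    GuardrailPolicy("prompt_injection", sensitivity=0.9, action="block"),
    GuardrailPolicy("pii", sensitivity=0.3, action="redact"),
]
print(evaluate({"prompt_injection": 0.35, "pii": 0.5}, policies))
# -> [('prompt_injection', 'block')]
```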
-
CSO: A Chief Security Officer's Guide to End-to-End Data Security for Artificial Intelligence
Chief Security Officers (CSOs) are facing an unprecedented challenge: AI systems are both amplifying existing data risks and introducing entirely new threats such as data poisoning, model reverse engineering, and supply chain contamination. This guide builds on the NIST AI Risk Management Framework (AI RMF), the Google Secure AI Framework (SAIF), and industry practices to provide CSOs with an actionable data security governance system.
-
CVE-2025-68664: Serialization Injection Vulnerability in LangChain, an Open Source Large Model Framework
LangChain, an open source framework for large models, has disclosed a serialization injection vulnerability (CVE-2025-68664), discovered by Yarden Porat, a security researcher at Cyata Security. The flaw stems from the "lc" key being missing in the serialization/deserialization process, which allows an attacker to leak environment variables, instantiate arbitrary objects, or even achieve remote code execution via prompt injection. The vulnerability affects all LangChain Core deployments before version 0.3.81 and in the 1.0.0-1.2.5 range. The maintainers released patched versions 1.2.5 and 0.3.81 on December 24 and tightened the default security policy at the same time.
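The sketch below illustrates the general class of bug behind this kind of advisory: a deserializer that instantiates whatever class a payload names without validating a trust marker or allowlist. It is a generic, hypothetical illustration, not LangChain's actual code or the actual patch; the payload format and function names are assumptions made for this example.

```python
# Generic illustration of the serialization-injection bug class, NOT
# LangChain's implementation: a deserializer that instantiates whatever
# class an attacker-controlled payload names, with no trust marker
# (such as an "lc" key) or allowlist check.
import importlib

def unsafe_loads(payload: dict):
    """Rebuild an object from {"module": ..., "class": ..., "kwargs": ...}.
    If the payload is attacker-controllable (e.g. smuggled in via prompt
    injection), the attacker can pick any importable class."""
    module = importlib.import_module(payload["module"])
    cls = getattr(module, payload["class"])
    return cls(**payload.get("kwargs", {}))

# A crafted payload can instantiate arbitrary objects; with the right
# gadget class this reaches data exfiltration or code execution.
unsafe_loads({"module": "subprocess", "class": "Popen",
              "kwargs": {"args": ["echo", "pwned"]}})

# The general fix pattern: require an explicit trust marker and restrict
# deserialization to an allowlist of known-safe classes.
ALLOWED = {("mypkg.chains", "SafeChain")}  # hypothetical allowlist

def safe_loads(payload: dict):
    if payload.get("lc") != 1 or (payload["module"], payload["class"]) not in ALLOWED:
        raise ValueError("refusing to deserialize untrusted object")
    return unsafe_loads(payload)
```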
-
Artificial Intelligence Security Defense in Depth: An Explanation of Google's SAIF AI Security Framework
With the widespread adoption of Large Language Models (LLMs) and Generative AI (GenAI) in enterprise applications, the traditional software security paradigm built on deterministic logic struggles to cope with new stochastic threats such as model inversion, data poisoning, and prompt injection. Google's Secure AI Framework (SAIF), launched in 2023, proposes a systematic defense architecture that combines traditional cybersecurity best practices with the specific characteristics of Artificial Intelligence (AI) systems. This paper analyzes SAIF's six core pillars, its ecosystem collaboration mechanisms, and its evolution path from the perspective of architectural design, providing theoretical and practical references for building enterprise-grade AI security systems.
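As a small, concrete illustration of the defense-in-depth idea that SAIF emphasizes (layered controls plus extending detection and response around AI systems), the following Python sketch wraps an LLM call with an input screen, an output scan, and an audit log. All rule lists, thresholds, and function names here are illustrative assumptions, not part of SAIF or any particular product.

```python
# Minimal sketch of layered (defense-in-depth) controls around an LLM call.
# The patterns and function names are illustrative only.
import re

INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal the system prompt"]
SECRET_PATTERN = re.compile(r"(AKIA[0-9A-Z]{16}|-----BEGIN PRIVATE KEY-----)")

def input_layer(prompt: str) -> str:
    """Layer 1: screen user input for obvious injection phrasing."""
    for pat in INJECTION_PATTERNS:
        if re.search(pat, prompt, re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    return prompt

def output_layer(completion: str) -> str:
    """Layer 2: redact credential-like strings before output leaves the system."""
    return SECRET_PATTERN.sub("[REDACTED]", completion)

def audited_call(prompt: str, model_call) -> str:
    """Layer 3: log every request/response pair for detection and response."""
    safe_prompt = input_layer(prompt)
    completion = output_layer(model_call(safe_prompt))
    print(f"audit: prompt_len={len(safe_prompt)} completion_len={len(completion)}")
    return completion

# Usage with a stand-in model that "leaks" a fake AWS-style access key:
print(audited_call("Summarize this report.", lambda p: "AKIA" + "A" * 16))
# -> [REDACTED]
```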
-
Artificial Intelligence (AI) Large Model Security Risks and Defense-in-Depth Report
This report examines the five core attack surfaces along the critical AI chain, namely AI Assistants, Agents, Tools, Models, and Storage, and presents targeted security risks, defense architectures, and solutions for each.