Artificial Intelligence (AI) security
-
AI 安全:Cursor IDE 企业级安全开发指南
Cursor 是一款基于开源项目 Visual Studio Code(VS Code)的 AI 驱动集成开发环境,深度集成生成式大语言模型(如 GPT-4、Claude),为开发人员提供智能代码生成、自动补全、错误修复等功能。其核心特性包括 Cursor Tab(智能代码补全)、Agent Mode(自主代码生成)和 Model Context Protocol(MCP)集成等。
-
CSO:2025年中国网络安全从合规到AI驱动风险治理趋势
一、报告摘要 核心观点 2025年中国网络安全领域正经历范式转变。监管强化与生成式AI爆发式采纳的”双重压力”,正将企业安全焦点从被动合规防御转向主动数据治…
-
Data Security Intelligence Body: AI-driven paradigm for next-generation enterprise data security protection
With the rapid evolution of Large Language Model (LLM) technology and the deepening of enterprise digital transformation, the traditional passive data security protection system has been difficult to meet the defense needs of modern threats. The first data security intelligence in China realizes the paradigm shift from "artificial stacking" to "intelligent initiative" by integrating generative AI, adaptive protection mechanism, multi-intelligence collaboration and other cutting-edge technologies.
-
AI Security:Artificial Intelligence AI Attack Surface Analysis Report 2026
In 2026, the AI security landscape is undergoing a fundamental reshaping. In response to a global cybersecurity talent gap of up to 4.8 million, organizations are massively deploying high-privilege, 24/7 running AI intelligences are becoming targets for attackers. However, these autonomous systems are also quickly becoming a focal point for attackers.Top security organizations such as Palo Alto Networks, Moody's, and CrowdStrike predict that AI intelligences will be the biggest insider threat facing enterprises by 2026. Traditional defense frameworks are failing and new governance systems and protection architectures have become necessary.
-
Global Cyber Attack Landscape and AI Security Threat Report 2025
The year 2025 is a year of "unprecedented complexity" in the field of cybersecurity. With the rapid development and large-scale application of artificial intelligence (AI) technology, cyber threats will present unprecedented complexity and scale. This report analyzes the new posture of global cyberattacks, typical security incidents, AI security threats, and corresponding risk management strategies in 2025, providing technical references and decision-making basis for AI engineers, security engineers, and chief security officers (CSOs).
-
AI Security: Building an Enterprise AI Security System Based on ATT&CK Methodology
This paper takes the AI security threat matrix as the core framework, and based on the mature ATT&CK methodology, it systematically elaborates on the full lifecycle security threats faced by AI systems, including key attack techniques such as data poisoning, model extraction, privacy leakage, confrontation samples, and cue word injection, etc., and puts forward the corresponding defense strategies and enterprise landing solutions, providing AI engineers, security engineers, and CSOs with professional technical Reference.
-
AI IDE Security: Cursor Windsurf Google Antigravity Supply Chain Attack Analysis
AI development-driven IDEs such as Cursor, Windsurf and Google Antigravity are at risk of supply chain attacks due to configuration file flaws inherited from VSCode. The three platforms, which collectively have more than a million users, have an automated recommendation mechanism for extensions that could be exploited by an attacker to push malicious code to developers by polluting the OpenVSX extension marketplace. The vulnerability allows an attacker to register undeclared extension namespaces and upload malicious extensions to gain SSH keys, AWS credentials, and source code access without traditional social engineering. The risk's impact surface highlights an emerging attack vector in the developer toolchain and marks the formal inclusion of IDE extensions in the MITRE ATT&CK framework.
-
Large model security: open source framework Guardrails security fence introduction and analysis
OpenGuardrails is the first complete open source enterprise-grade large model security guardrail platform, supporting 119 languages, unified LLM architecture, configurable sensitivity policies, and multi-cloud deployment. This report deeply analyzes its core technology innovation, application scenarios, deployment models, performance benchmarking and future development, providing security compliance guidelines for AI applications in regulated industries such as finance, healthcare, and law. By analyzing OpenGuardrails' configurable policies, efficient model design and production-grade infrastructure, it reveals the development direction of the next-generation AI security guardrails.
-
CSO:2025 Artificial Intelligence (AI) Cyber Attack and Defense Statistics, Trends, Costs, and Defense Security Report
Artificial Intelligence is changing the defense and offense paradigm in security. Attackers use AI to generate realistic phishing messages at scale, clone executive voices, detect exposed AI infrastructure and automate intrusion penetration. Defenders, on the other hand, use AI to detect anomalies faster, categorize risk alerts and contain incidents. However, skills gaps and misconfigured AI architectures open the door to new attacks. This guide summarizes the latest AI cyberattack statistics for 2025, translates the data into business impact, and provides a prioritized course of action you can implement this year.
-
CSO: A Chief Security Officer's Guide to Full-Link Security for Artificial Intelligence Data
Chief Security Officers (CSOs) are facing an unprecedented challenge: AI systems are both amplifying existing data risks and introducing entirely new threats such as data poisoning, model reverse engineering, and supply chain contamination. This guide builds on the NIST AI Risk Management Framework (AI RMF), the Google Secure AI Framework (SAIF), and industry practices to provide CSOs with an actionable data security governance system.