• Alibaba Cloud Zero Trust Practice: Identity and Network Micro-Isolation in Production Networks
  • New secure infrastructure: Alibaba data asset blueprint

Topic introduction: Chief Security Officer - your think tank of security experts

  • CSO: China Cybersecurity 2025: From Compliance to AI-Driven Risk Governance

    1. Report Summary. Core view: In 2025, China's cybersecurity field is undergoing a paradigm shift. The "dual pressure" of tightening regulation and the explosive adoption of generative AI is shifting the enterprise security focus from passive compliance defense to proactive data gov…

    January 18, 2026
  • Data Security Intelligent Agent: An AI-Driven Paradigm for Next-Generation Enterprise Data Security Protection

    With the rapid evolution of large language model (LLM) technology and the deepening of enterprise digital transformation, traditional passive data security protection systems can no longer meet the defense needs posed by modern threats. China's first data security intelligent agent achieves a paradigm shift from "manual stacking" to "intelligent initiative" by integrating generative AI, adaptive protection mechanisms, multi-agent collaboration, and other cutting-edge technologies.

    January 13, 2026
  • AI Security: Artificial Intelligence Attack Surface Analysis Report 2026

    In 2026, the AI security landscape is undergoing a fundamental reshaping. In response to a global cybersecurity talent gap of up to 4.8 million, organizations are deploying high-privilege, always-on AI agents at scale. However, these autonomous systems are also quickly becoming a focal point for attackers. Top security organizations such as Palo Alto Networks, Moody's, and CrowdStrike predict that AI agents will be the biggest insider threat facing enterprises by 2026. Traditional defense frameworks are failing, and new governance systems and protection architectures have become necessary.

    January 10, 2026
  • Global Cyber Attack Landscape and AI Security Threat Report 2025

    The year 2025 has been one of "unprecedented complexity" in cybersecurity. With the rapid development and large-scale application of artificial intelligence (AI) technology, cyber threats present unprecedented complexity and scale. This report analyzes the new posture of global cyberattacks, typical security incidents, AI security threats, and corresponding risk management strategies in 2025, providing technical references and a decision-making basis for AI engineers, security engineers, and chief security officers (CSOs).

    January 9, 2026
  • AI Security: Building an Enterprise AI Security System Based on ATT&CK Methodology

    This paper takes the AI security threat matrix as its core framework. Building on the mature ATT&CK methodology, it systematically describes the full-lifecycle security threats facing AI systems, covering key attack techniques such as data poisoning, model extraction, privacy leakage, adversarial examples, and prompt injection, and proposes corresponding defense strategies and enterprise implementation plans, providing AI engineers, security engineers, and CSOs with a professional technical reference.

    January 9, 2026
  • AI IDE Security: Cursor Windsurf Google Antigravity Supply Chain Attack Analysis

    AI-driven IDEs such as Cursor, Windsurf, and Google Antigravity are exposed to supply chain attacks through configuration flaws inherited from VSCode. The three platforms, which collectively have more than a million users, share an automated extension recommendation mechanism that an attacker can exploit to push malicious code to developers by poisoning the OpenVSX extension marketplace. The vulnerability allows an attacker to register undeclared extension namespaces and upload malicious extensions that harvest SSH keys, AWS credentials, and source code access without traditional social engineering. The impact surface highlights an emerging attack vector in the developer toolchain and marks the formal inclusion of IDE extensions in the MITRE ATT&CK framework.

    January 7, 2026
  • Large Model Security: Introduction and Analysis of the Open-Source Security Guardrail Framework OpenGuardrails

    OpenGuardrails is the first complete open-source, enterprise-grade large model security guardrail platform, supporting 119 languages, a unified LLM architecture, configurable sensitivity policies, and multi-cloud deployment. This report analyzes in depth its core technical innovations, application scenarios, deployment models, performance benchmarks, and future development, providing security compliance guidance for AI applications in regulated industries such as finance, healthcare, and law. By examining OpenGuardrails' configurable policies, efficient model design, and production-grade infrastructure, it outlines the direction of next-generation AI security guardrails.

    January 6, 2026
  • CSO: 2025 Artificial Intelligence (AI) Cyberattack and Defense Statistics, Trends, Costs, and Security Report

    Artificial intelligence is changing both the offensive and defensive paradigm in security. Attackers use AI to generate realistic phishing messages at scale, clone executive voices, detect exposed AI infrastructure, and automate intrusion attempts. Defenders, in turn, use AI to detect anomalies faster, triage risk alerts, and contain incidents. However, skills gaps and misconfigured AI architectures open the door to new attacks. This guide summarizes the latest AI cyberattack statistics for 2025, translates the data into business impact, and provides a prioritized course of action you can implement this year.

    January 4, 2026
  • CSO: A Chief Security Officer's Guide to Full-Link Security for Artificial Intelligence Data

    Chief Security Officers (CSOs) are facing an unprecedented challenge: AI systems are both amplifying existing data risks and introducing entirely new threats such as data poisoning, model reverse engineering, and supply chain contamination. This guide builds on the NIST AI Risk Management Framework (AI RMF), the Google Secure AI Framework (SAIF), and industry practices to provide CSOs with an actionable data security governance system.

    December 31, 2025
  • The MCP Governance Framework: How to build a next-generation security model that resists AI superpowers

    This piece focuses on how MCP directly impacts existing security systems while empowering AI to actually "execute". On one hand, MCP lets LLMs access tools, databases, and business systems through a unified protocol, turning them into agents that can act across systems rather than passive question-and-answer bots. On the other hand, this capability relies on "hybrid identity" and long-chain authorization and authentication, which systematically weakens the clear identity, least privilege, and continuous verification that zero trust requires, and dramatically enlarges invisible threats such as context poisoning, tool poisoning, and supply chain attacks.
    Governance must now be rebuilt around MCP, with the gateway as the hub, unified identity, fine-grained authorization, and full-link auditing, in order to unlock the true value of agentic AI without sacrificing security.

    December 30, 2025