Artificial Intelligence (AI) security

  • The MCP Governance Framework: How to build a next-generation security model that resists AI superpowers

    Focus on how MCP directly impacts the existing security system while empowering AI to actually "execute". On the one hand, MCP allows LLMs to access tools, databases, and business systems through a unified protocol, truly turning them into multi-agents that can cross systems rather than passive question-and-answer bots. On the other hand, this ability relies on "hybrid identity" and long-link authorization and authentication, so that the clear identity, minimal privileges and continuous verification required by zero trust are systematically weakened, and the context of poisoning, tool poisoning, supply chain attacks and other invisible threats are dramatically enlarged.
    Right now, governance must be rebuilt around MCP - with the gateway as the hub, unified identity, fine-grained authorization, and full-link auditing - in order to unlock the true value of agentic AI without sacrificing security.

    December 30, 2025
    02.3K0
  • AI security architecture: from AI capabilities to security platform landing practice

    Future-oriented AI security architecture is not only a technical issue, but also a strategic shift. From "tool-driven" to "intelligence-driven", from "after-the-fact response" to "before-the-fact governance", from "artificial dependence" to "human-machine collaboration" - these shifts will profoundly change the face of the security industry. From "artificial dependence" to "human-machine collaboration" - these changes will profoundly change the appearance of the security industry.

    Those enterprises that take the lead in building AI-native security systems will gain a competitive advantage in multiple dimensions such as threat detection, operational efficiency, cost control, and talent retention. And those enterprises that are stuck in traditional tool stacking and rule writing will eventually be eliminated by the times.

    The development of AI is irreversible. Security decision makers should take immediate action to seize this historic opportunity by launching the construction of AI security platforms in four dimensions: strategy, organization, technology and investment.

    December 30, 2025
    07.1K0
  • CVE-2025-68664 : Serialized Injection Vulnerability Report for LangChain, an Open Source Framework for Large Models

    LangChain, a large open source model, has disclosed a severity-level serialization injection vulnerability (CVE-2025-68664), discovered by Yarden Porat, a security researcher at Cyata Security, in which the "lc" key is missing in the serialization/deserialization process. This vulnerability, discovered by Cyata Security security researcher Yarden Porat, is caused by a missing "lc" key in the serialization/deserialization process, which allows an attacker to leak environment variables, instantiate arbitrary objects, or even remotely execute code by means of prompt injection. The vulnerability affects all deployments of LangChain Core before version 0.3.81 and within the range of versions 1.0.0-1.2.5. Officials have released patch versions 1.2.5 and 0.3.81 on December 24th and tightened the default security policy simultaneously.

    December 27, 2025
    02.9K0
  • AI Intelligence Body Security: GitHub Actions Prompt Word Injection (PromptPwnd) Vulnerability

    PromptPwnd is a new type of vulnerability discovered by the Aikido Security research team that poses a serious threat to GitHub Actions and GitLab CI/CD pipelines that integrate AI agents. The vulnerability utilizes Prompt Injection to cause key compromise, workflow manipulation, and supply chain compromise by injecting malicious commands into an AI model, causing it to perform high-privilege operations. At least five Fortune 500 companies have been affected, and several high-profile projects such as the Google Gemini CLI have been verified to have the vulnerability.

    December 27, 2025
    01.6K0
  • AI Hacking: Automated Infiltration Analysis of AI Agents

    Strix represents a paradigm shift in the field of cybersecurity testing - an evolution from a manual-centric penetration approach to a multi-agent collaborative automation model. The tool realizes complete vulnerability lifecycle management (reconnaissance, exploitation, validation) through LLM-driven autonomous intelligences, demonstrating significant cost advantages (cost reduction of 70% or more) and time efficiency advantages (test cycle shortened from weeks to hours) over traditional manual penetration and passive scanning tools. However, its limitations are equally obvious: the success rate of zero-day vulnerability exploitation is only 10-12%, the detection capability of business logic vulnerability is seriously insufficient, and the inherent security risks of multi-agent systems (hint injection, inter-agent trust abuse) require a structured governance framework.

    December 24, 2025
    02.8K0
  • AI Security:Artificial Intelligence (AI) Attack Surface Expansion and Security Governance

    Many people think that AI's impact on cybersecurity is mainly in the form of "one more smarter tool". But after reading this compendium on AI cybersecurity in Asia-Pacific (AP), a more solid conclusion is that AI is making attacks faster, cheaper, and more realistic, while...

    December 24, 2025
    01.9K0
  • OWASP Release: AI Intelligence Body Security OWASP Top 10 2026

    As AI evolves from mere "Chatbots" to "Agentic AI" with autonomous planning, decision-making and execution capabilities, the attack surface of applications has fundamentally changed. In contrast to traditional LLM ...

    December 22, 2025
    03.8K0
  • Artificial Intelligence Security Defense in Depth: Explanation of Google SAIF AI Security Framework

    With the widespread penetration of Large Language Models (LLM) and Generative Artificial Intelligence (GenAI) in enterprise applications, the traditional software security paradigm based on deterministic logic is struggling to cope with new stochastic threats such as model inversion, data poisoning, and cue word injection.Google's Secure AI Framework (SAIF), to be launched in 2023, proposes a systematic defense architecture that aims to combine traditional Cybersecurity best practices with the specificities of Artificial Intelligence (AI) systems. The Secure AI Framework (SAIF), launched by Google in 2023, proposes a systematic defense architecture that aims to combine the best practices of traditional cybersecurity with the specificities of AI systems. In this paper, we will analyze the six core pillars, ecological synergy mechanism and evolution path of SAIF from the perspective of architectural design, providing theoretical and practical references for the construction of enterprise-level AI security system.

    December 20, 2025
    02.4K0
  • CVE-2025-34291: Langflow AI Intelligence Body and Workflow Platform Account Takeover and Remote Code Execution Vulnerability

    CVE-2025-34291 is a critical vulnerability chain found in the Langflow AI Agent and Workflow Platform with a security score of CVSS v4.0: 9.4. The vulnerability allows an attacker to achieve full account takeover and remote code execution (RCE) of Langflow instances by inducing users to visit a malicious web page.

    December 11, 2025
    02.0K0
  • Artificial Intelligence (AI) Big Model Security Risks and Defense In-Depth Report

    This report is based on the five core attack surfaces consisting of AI AI critical links from AI Assistants, Agents, Tools, Models, and Storage, with targeted security risks, defense architectures, and solutions.

    November 29, 2025
    010.3K0