intelligence gathering

  • Nginx UI Authentication Bypass Vulnerability (CVE-2026-33032 / MCPwn)

    The core of the vulnerability is due to a logical error in route registration: the /mcp endpoint is protected by the AuthRequired() middleware, but its paired /mcp_message endpoint, which is used to receive instructions for the actual tool call, is deployed without that authentication middleware. This allows any attacker with network access to this UI to take over the Nginx service without any credentials.

    April 19, 2026
    01.1K0
  • AI Security Transformation: an analysis of the Claude Code Security release and its impact on the cybersecurity industry

    Big model technology is evolving from generative AI to an intelligent body with deep reasoning capability, pushing cybersecurity from rule-driven to AI-native mode. solutions such as Claude Code Security realize intelligent discovery and closed-loop remediation of vulnerabilities through architectural mapping and data-flow tracking, reshaping the security of the software supply chain and triggering a drastic change in the pattern of the traditional security market.
    Key points include:
    1. From generation to reasoning: the large model has evolved from text completion to an intelligent body with code understanding and task planning capabilities, supporting complex logical analysis and autonomous decision-making.
    2. Security paradigm shift: the big model in vulnerability detection, threat intelligence, code repair and other aspects beyond the traditional rules system, to achieve from “auxiliary tools” to “defense core” role upgrade.
    3. Claude Code Security mechanism: This solution provides real-time, intelligent code security analysis integrated into the development process, relying on three major capabilities: architecture mapping, data flow tracking and closed-loop repair.
    4. Industry landscape impact: AI-native security solutions have led to a decline in the share price of traditional security companies, driving toolchain consolidation, lowering the defense threshold, and compressing the vulnerability exploitation window.
    5. Technical limitations exist: model illusions can lead to false positives and inference accuracy in highly customized or closed architectures remains challenging and requires continuous validation and optimization.

    February 21, 2026
    04.0K0
  • Big Model Security: Claude Desktop Extension Zero-Click Remote Code Execution Vulnerability

    Claude Desktop Extension leads to a zero-click remote code execution vulnerability based on indirect prompt injection due to a sandbox-less architecture and granting full system privileges to the AI agent. The vulnerability exploits a design flaw in the MCP protocol that lacks trust boundaries, allowing an attacker to achieve arbitrary code execution by contaminating external data sources. Despite the highest risk rating, the vendor refused to fix the vulnerability on the grounds that it was "outside of the threat model," sparking widespread controversy over the division of security responsibilities in the age of AI. This case highlights the fundamental security risks of AI agent systems in terms of privilege control and input validation.
    Key points include:
    1. High-privilege sandbox-less architecture: Claude DXT runs as a local MCP server, detached from the browser sandbox, inheriting all system privileges from the user, creating a high-risk attack surface.
    2. Zero-hit indirect prompt injection: The attacker embedded malicious commands in legitimate data sources such as Google Calendar, inducing the AI agent to obtain and mistakenly execute them on its own, with no user interaction required.
    3. MCP Protocol Trust Boundary Failure: The Model Context Protocol allows the output of low-risk operations to directly trigger high-risk system calls, resulting in an "obfuscated agent" vulnerability that makes the AI a springboard for attack.

    February 13, 2026
    03.0K0
  • AI BOT: An In-Depth Analysis of the AI Technology-Driven Automation Threat Landscape

    This article is based on the Imperva 2025 Malicious Robots Report, which reveals three core trends:
    The New Normal of Automated Traffic: Automated traffic surpassed human traffic for the first time in 2024, accounting for 511 TP3T, of which 371 TP3T were malicious bots, and growing for six consecutive years, marking a structural change in Internet interaction patterns and a new stage in enterprise security challenges.
    AI-enabled attack evolution: the proliferation of Artificial Intelligence (AI) and Large Language Models (LLMs) has significantly lowered the attack threshold, fueling the scale and sophistication of malicious automated attacks. ai is not only used to generate bots, but it also drives them to analyze, learn, and optimize escape techniques, spawning advanced bots that are more evasive, and leading to an increase in business logic attacks.
    APIs Become New Focus of Attacks: With the popularity of microservices and mobile apps, APIs have become a prime target for malicious bots due to their concentrated value, relatively weak defenses, and ease of automation.44% of advanced bot traffic was directed to APIs, and the financial services and telecom industries were the most severely attacked, with data scraping, payment fraud, and account takeover being the main attack tactics.
    In addition, the article analyzes in detail the resurgence of account takeover (ATO) attacks, noting their year-over-year growth of 40% in 2024, and explores the drivers of the surge in ATO attacks, the most impacted industries, and the regulatory penalties they may face. Finally, the paper proposes a multi-layered, adaptive defense-in-depth strategy, including going beyond traditional WAFs, strengthening API security, countering ATOs, building a unified security view, and continuous monitoring and threat intelligence, which is designed to help enterprises effectively counter the increasingly intelligent and scaled threat of malicious bots and protect digital assets and business continuity.

    February 10, 2026
    03.1K0
  • OpenClaw Integrates with VirusTotal Engine to Increase Detection of Malicious ClawHub Skills

    With the rapid development of Artificial Intelligence (AI) technology, open source AI intelligences (Agents) represented by OpenClaw are reshaping human-computer interaction and task automation in an unprecedented way. However, its powerful features and open ecosystem also bring serious security challenges. This paper provides an in-depth look at the security risks faced by OpenClaw's architecture, functionality, and its ecosystem (especially the ClawHub skill marketplace), and analyzes in detail its solution for integrating the VirusTotal scanning engine to detect and mitigate malicious skill threats. The article aims to provide researchers and practitioners in the field of AI security with a case study on the security governance of the intelligentsia ecosystem, as well as reflections on the future direction of AI supply chain security.

    February 8, 2026
    04.3K0
  • AI Supply Chain Security: Deep Analysis Report on the Attack Surfaces of About 175,000 Global Ollama Framework Instances

    With the popularity of large models (LLMs), open-source localized deployment frameworks, represented by Ollama, have dramatically lowered the threshold for developers to use and manage AI models. However, this convenience has also spawned new, large-scale security risks. A recent study jointly published by SentinelOne, Censys, and Pillar Security reveals the startling fact that there are more than 175,000 publicly exposed instances of Ollama on the Internet globally, creating a massive AI computing infrastructure security attack surface risk
    This report aims to analyze the technical aspects of this incident. This report aims to provide an in-depth technical analysis of this incident, analyze its attack surface, realistic threats, systemic risks, and propose corresponding enterprise-level security hardening and governance strategies.

    January 31, 2026
    03.7K0
  • AI open source framework: Chainlit AI framework ChainLeak vulnerability portfolio impact analysis

    ChainLeak, a high-risk security vulnerability in the Chainlit framework, including the principle of arbitrary file reading and SSRF vulnerability, attack demonstration, and protection recommendations for AI security practitioners and enterprise security teams.

    January 21, 2026
    13.7K0
  • Global Cyber Attack Landscape and AI Security Threat Report 2025

    The year 2025 is a year of "unprecedented complexity" in the field of cybersecurity. With the rapid development and large-scale application of artificial intelligence (AI) technology, cyber threats will present unprecedented complexity and scale. This report analyzes the new posture of global cyberattacks, typical security incidents, AI security threats, and corresponding risk management strategies in 2025, providing technical references and decision-making basis for AI engineers, security engineers, and chief security officers (CSOs).

    January 9, 2026
    06.2K0
  • AI IDE Security: Cursor Windsurf Google Antigravity Supply Chain Attack Analysis

    AI development-driven IDEs such as Cursor, Windsurf and Google Antigravity are at risk of supply chain attacks due to configuration file flaws inherited from VSCode. The three platforms, which collectively have more than a million users, have an automated recommendation mechanism for extensions that could be exploited by an attacker to push malicious code to developers by polluting the OpenVSX extension marketplace. The vulnerability allows an attacker to register undeclared extension namespaces and upload malicious extensions to gain SSH keys, AWS credentials, and source code access without traditional social engineering. The risk's impact surface highlights an emerging attack vector in the developer toolchain and marks the formal inclusion of IDE extensions in the MITRE ATT&CK framework.

    January 7, 2026
    03.5K0
  • OWASP Release: AI Intelligence Body Security OWASP Top 10 2026

    As AI evolves from mere "Chatbots" to "Agentic AI" with autonomous planning, decision-making and execution capabilities, the attack surface of applications has fundamentally changed. In contrast to traditional LLM ...

    December 22, 2025
    07.7K0