Prompt Injection

  • AI Security:Artificial Intelligence AI Attack Surface Analysis Report 2026

    In 2026, the AI security landscape is undergoing a fundamental reshaping. In response to a global cybersecurity talent gap of up to 4.8 million, organizations are massively deploying high-privilege, 24/7 running AI intelligences are becoming targets for attackers. However, these autonomous systems are also quickly becoming a focal point for attackers.Top security organizations such as Palo Alto Networks, Moody's, and CrowdStrike predict that AI intelligences will be the biggest insider threat facing enterprises by 2026. Traditional defense frameworks are failing and new governance systems and protection architectures have become necessary.

    January 10, 2026
    02.4K0
  • AI Security: Building an Enterprise AI Security System Based on ATT&CK Methodology

    This paper takes the AI security threat matrix as the core framework, and based on the mature ATT&CK methodology, it systematically elaborates on the full lifecycle security threats faced by AI systems, including key attack techniques such as data poisoning, model extraction, privacy leakage, confrontation samples, and cue word injection, etc., and puts forward the corresponding defense strategies and enterprise landing solutions, providing AI engineers, security engineers, and CSOs with professional technical Reference.

    January 9, 2026
    01.4K0
  • AI Intelligence Body Security: GitHub Actions Prompt Word Injection (PromptPwnd) Vulnerability

    PromptPwnd is a new type of vulnerability discovered by the Aikido Security research team that poses a serious threat to GitHub Actions and GitLab CI/CD pipelines that integrate AI agents. The vulnerability utilizes Prompt Injection to cause key compromise, workflow manipulation, and supply chain compromise by injecting malicious commands into an AI model, causing it to perform high-privilege operations. At least five Fortune 500 companies have been affected, and several high-profile projects such as the Google Gemini CLI have been verified to have the vulnerability.

    December 27, 2025
    01.4K0
  • Artificial Intelligence (AI) Big Model Security Risks and Defense In-Depth Report

    This report is based on the five core attack surfaces consisting of AI AI critical links from AI Assistants, Agents, Tools, Models, and Storage, with targeted security risks, defense architectures, and solutions.

    November 29, 2025
    010.1K0