AI security

  • LLM Security: An Introduction and Analysis of the Open-Source OpenGuardrails Safety Guardrail Framework

    OpenGuardrails is the first fully open-source, enterprise-grade LLM safety guardrail platform, supporting 119 languages, a unified LLM architecture, configurable sensitivity policies, and multi-cloud deployment. This report provides an in-depth analysis of its core technical innovations, application scenarios, deployment models, performance benchmarks, and future roadmap, offering security and compliance guidance for AI applications in regulated industries such as finance, healthcare, and law. By examining OpenGuardrails' configurable policies, efficient model design, and production-grade infrastructure, it points toward the direction of next-generation AI safety guardrails.

    January 6, 2026
  • CSO: A Chief Security Officer's Guide to End-to-End Security for Artificial Intelligence Data

    Chief Security Officers (CSOs) are facing an unprecedented challenge: AI systems are both amplifying existing data risks and introducing entirely new threats such as data poisoning, model reverse engineering, and supply chain contamination. This guide builds on the NIST AI Risk Management Framework (AI RMF), the Google Secure AI Framework (SAIF), and industry practices to provide CSOs with an actionable data security governance system.

    December 31, 2025
  • AI Security Architecture: From AI Capabilities to Security Platform Implementation in Practice

    A future-oriented AI security architecture is not only a technical issue but also a strategic shift: from "tool-driven" to "intelligence-driven", from "after-the-fact response" to "before-the-fact governance", and from "manual dependence" to "human-machine collaboration". These shifts will profoundly reshape the security industry.

    Enterprises that take the lead in building AI-native security systems will gain competitive advantages across multiple dimensions, including threat detection, operational efficiency, cost control, and talent retention. Those that remain stuck in traditional tool stacking and rule writing will eventually be left behind.

    The development of AI is irreversible. Security decision-makers should act immediately to seize this historic opportunity by launching AI security platform construction along four dimensions: strategy, organization, technology, and investment.

    December 30, 2025
  • AI Hacking: Automated Infiltration Analysis of AI Agents

    Strix represents a paradigm shift in cybersecurity testing: an evolution from manual-centric penetration testing to a multi-agent, collaborative automation model. The tool achieves full vulnerability lifecycle management (reconnaissance, exploitation, validation) through LLM-driven autonomous agents, demonstrating significant cost advantages (a reduction of 70% or more) and time advantages (test cycles shortened from weeks to hours) over traditional manual penetration testing and passive scanning tools. Its limitations are equally clear, however: its success rate on zero-day vulnerability exploitation is only 10-12%, its ability to detect business logic vulnerabilities is seriously lacking, and the inherent security risks of multi-agent systems (prompt injection, inter-agent trust abuse) call for a structured governance framework.

    December 24, 2025
  • AI Security: Artificial Intelligence (AI) Attack Surface Expansion and Security Governance

    Many people assume AI's impact on cybersecurity amounts to "just another, smarter tool". But after reading this compendium on AI cybersecurity in Asia-Pacific (AP), a more solid conclusion is that AI is making attacks faster, cheaper, and more realistic, while...

    December 24, 2025
  • OWASP Release: AI Agent Security OWASP Top 10 2026

    As AI evolves from mere "chatbots" into "Agentic AI" with autonomous planning, decision-making, and execution capabilities, the attack surface of applications has fundamentally changed. In contrast to traditional LLM ...

    December 22, 2025
  • Artificial Intelligence Security Defense in Depth: An Explanation of Google's SAIF AI Security Framework

    With the widespread adoption of Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in enterprise applications, the traditional software security paradigm built on deterministic logic is struggling to cope with new stochastic threats such as model inversion, data poisoning, and prompt injection. Google's Secure AI Framework (SAIF), launched in 2023, proposes a systematic defense architecture that combines traditional cybersecurity best practices with the specific characteristics of Artificial Intelligence (AI) systems. This paper analyzes SAIF's six core pillars, ecosystem collaboration mechanisms, and evolution path from the perspective of architectural design, providing theoretical and practical references for building enterprise-grade AI security systems.

    December 20, 2025
  • Artificial Intelligence (AI) Big Model Security Risks and Defense In-Depth Report

    This report examines the five core attack surfaces along the AI critical chain, from AI Assistants, Agents, Tools, and Models to Storage, covering the targeted security risks, defense architectures, and solutions for each.

    November 29, 2025
  • Healthcare Industry Cybersecurity Analysis Report 2024

    In 2024, healthcare faces evolving cybersecurity threats, especially as small healthcare providers and connected technologies become new targets for attack. Data breaches are widespread and costly. Advances in Artificial Intelligence (AI) and Machine Learning (ML) provide new tools for detecting and predicting cyberthreats, while zero-trust security frameworks and blockchain technologies represent advances in defense. The regulatory environment continues to evolve, posing new compliance challenges for healthcare organizations, particularly around telemedicine and third-party vendor risk management. The case studies highlight the importance of a proactive strategy in staff training, technology deployment, and compliance. Going forward, the healthcare industry must remain vigilant and adaptable to cybersecurity threats to ensure secure, continuous patient care.

    February 10, 2024