风险洞察:Shadow AI 引爆个人 LLM 账号数据泄露
最新云威胁报告显示,员工使用个人账户访问 ChatGPT、Google Gemini、Copilot 等 LLM 工具已成为企业数据泄露的主渠道之一,genAI 相关的数据策略违规平均每月高达 223 起,较去年翻倍增长。 报告指出,部分组织中发往生成式 AI 应用的提示数量在一年内增长六倍,头部 1% 企业每月提交提示超过 140 万条,其中包含源代码、合同文本、客户资料甚至凭据等高敏数据,一旦被用于模型训练或遭二次窃取,将给合规与知识产权带来长期不可逆风险。 这类 Shadow AI 的共性在于“看不见、管不住”:安全团队往往只盯住官方接入的大模型,而忽视浏览器、个人账号和移动端的灰色通道,未来还将叠加基于会话上下文的个性化广告与第三方插件生态,进一步模糊数据边界。 企业应尽快建立 genAI 使用策略与分类分级规范,借助 CASB/SASE 对 AI 流量进行识别与阻断,对高敏部门默认禁止个人 LLM 访问,并引入企业版受控 LLM 作为替代,配套审计与数据最小化策略。
ChatGPT 引入对话式广告:AI 盈利模式背后的数据安全与合规新挑战
OpenAI 宣布在 ChatGPT 免费版与 Go 版本中测试会话内广告,这一商业化调整同时把“广告合规与数据安全”推上台前。 官方强调不会向广告商出售对话数据、广告与回答逻辑隔离,但并未细化用于个性化投放的数据类型和处理路径,这正是安全与合规团队需要重点审视的灰区。 从技术路径看,聊天式广告需要把对话上下文、兴趣信号与用户画像做实时特征抽取,再通过推荐或排序模型选出“相关赞助内容”,这要求在日志采集、特征生成、模型训练和推送链路中建立最小化采集、脱敏与用途限定控制,否则极易演变为“隐性画像 + 越界再利用”。 对企业安全团队而言,一方面要将此类“对话式 AI 广告”纳入第三方服务风险评估,排查是否存在跨区域数据流转、监管红线(如未成年人、敏感场景广告屏蔽策略)与审计缺口;另一方面也要反向审视自身内部的 AI 助手和客服机器人,是否同样存在“借交互顺手做广告/画像”的冲动和暗箱逻辑。 可以预见,面向未来的大模型产品安全,将从单纯讨论“模型越权、提示注入”扩展到“人机对话中的隐性营销边界”,如何在可持续盈利、用户信任与监管要求之间设计透明可控的广告与数据治理机制,正成为下一轮 AI 安全实践的关键命题。
风险洞察:2026全球企业安全焦点转向AI漏洞
报告核心发现:
WEF(世界经济论坛)在2025年8-10月期间对全球800名企业从业者进行问卷调查,结果于2026年1月发布。调查显示,94%的高管认为AI是2026年网络安全格局中最显著的变革驱动力;87%的受访者将AI相关漏洞列为增长最快的网络安全风险。与2025年的调查相比,企业评估AI工具安全性的比例从37%剧增至64%。从攻防角度看,企业既担心攻击者利用AI加速攻击速度(72%关注),也在投资AI防御工具。
AI安全三大核心隐患:
CSO们具体识别的Top 3风险依次为:
(1)数据泄露与隐私暴露(30%)——AI模型训练数据被投毒或推理时敏感信息被提取;
(2)对手AI能力提升(28%)——恶意行为人利用AI生成钓鱼邮件、自适应恶意软件、虚假舆论;
(3)AI系统技术安全性(15%)——模型后门、权限混淆、提示注入等AI特有漏洞。
与此同时,73%企业已从2025年的”勒索软件防御优先”转向2026年的”AI驱动欺诈与网络钓鱼防御”。
AI智能体权限爆炸问题:
CyberArk与其他安全厂商报告指出一个关键趋势:非人类身份(machine identities)即将成为第一大云侵害向量。到2026年,每个AI智能体都是一个”身份”——需要数据库凭证、云服务token、代码仓库密钥等。随着组织部署数十乃至数百个AI智能体,这些身份累积的权限呈指数增长,成为攻击者目标。OWASP新增的”工具误用”(tool misuse)攻击向量尤其危险:攻击者在不修改AI指令前缀(system prompt)的前提下,通过恶意数据注入(如订单地址字段中的提示注入),诱导AI执行非预期的API调用、权限提升或数据窃取操作。
前瞻性应对策略:
实施AI身份与访问治理(IAM):给每个AI智能体分配最小必需权限,定期审计其凭证与API调用日志
部署表达式与提示防护:在AI代理的输入验证层增加指令注入检测,对不信任的外部数据源进行隔离处理
建立AI供应链信任体系:审查第三方AI模型、插件与数据源的安全来源,防止后门模型被部署
扩展AI感知SIEM:传统日志分析已难以应对AI的高度自主性,需要专用的AI行为异常检测
组建AI安全应急响应团队:因为传统的网络安全团队缺乏AI特定威胁的应急响应经验
趋势洞察:
2026年将是从”AI赋能安全防御”向”AI安全治理体系化”的转折点。不再是简单地”用AI打击恶意AI”,而是要在身份管理、权限治理、审计日志、应急响应等全流程中融入AI风险意识。那些仍停留在”AI好处论”而忽视权限管理的企业,将面临最大的代价。
AI 网络攻击成新趋势:2025 Q4 攻击样本预测试 2026 AI安全风险
Security reports show that in the fourth quarter of 2025 there were multiple cases of cyberattacks utilizing autonomous AI Agents, with attackers significantly expanding their attack surface by automating intelligence gathering, lateral movement, and elevation of privilege through the intelligences. Some analyses point out that some national-level threat actors have already used AI Agents to execute the steps of the 80%-90% attack chain in actual combat, with speed and covertness exceeding that of traditional human hacker teams. Experts predict that, as big models and automation frameworks sink further, "autonomous AI attacks" could evolve into a new mainstream threat in 2026 that is more destructive than traditional ransomware and spear phishing, especially targeting critical infrastructure and cloud environments.
AI Fraud and Data Breaches Will Surge in 2026
AI will be one of the core threats to cybersecurity in 2026, with more than 8,000 data breaches and around 345 million records exposed globally in the first half of 2025 alone, according to Experian's latest forecast. Meanwhile, Experian and Fortune report that AI-driven fraud will continue to skyrocket in 2026, with losses already reaching about $12.5 billion in the previous year, and deep counterfeiting and smart phishing proliferating rapidly on financial, e-commerce and social platforms. According to the reports, AI tools are "democratizing" fraud capabilities, making it possible for low-skill attackers to batch-generate highly realistic text messages, voices and synthetic videos, making it difficult for traditional anti-fraud rules to recognize these new attack patterns in time.
Global Cybersecurity Outlook 2026: AI Has Become the Biggest Risk for Cybersecurity Attack Growth
The Global Cybersecurity Outlook 2026 report states that 87% of organizations surveyed identified AI-related vulnerabilities as the fastest-growing cyber risk since 2025, and that AI is strengthening both the offensive and defensive ends of the spectrum. According to the report, 77% of organizations have adopted AI in their security operations for phishing detection, anomalous intrusion response and user behavior analysis, but data breaches and model misuse emerged as one of the top concerns for executives. The percentage of organizations proactively evaluating the security of AI tools has increased from 37% to 64% compared to 2025, indicating a shift from "blindly embracing AI" to "prioritizing security governance".
BreachForums Dark Web Forums History Database Major Leak
Since 2022, BreachForums has been one of the world's largest and most notorious forums for data breaches and hacking deals. This underground data bazaar is not only a stage for hackers to show off their war chests, but it is also the curation point for many major data breaches and ransom campaigns.BreachForums was originally founded by Conor Fitzpatrick (ID "pompompurin"), and was taken over in 2023 after his arrest by ShinyHunters took over operations. It was later taken offline for MyBB 0day, and some notes were released at the time, the authenticity of which is unknown. In June of this year, France and the United States worked together to arrest several more core members, including ShinyHunters, Hollow, Noct and Depressed.
Core details of the incident
Leak source: allegedly from a member within the original BreachForums.
Leaked content: a zip file named breachforum.7z containing:
Full SQL database file: contains core data such as user registration information, credentials, etc.
User PGP key: may affect the security of encrypted communications.
Statement document: a long, stylized, "poetic" text (.txt), the content of which has been identified as possibly having AI embellishments, or as a leaker's statement.
Authenticity of data: The existing users have verified that the data is authentic and recent by verifying the temporary email address they have used in the document.
Downloaded from: https://shinyhunte[...] rs/breachforum.7z (Note: links have been rendered innocuous for security reasons, please do not access them directly).
Leakage data analysis (email domain ranking)
The statistical ranking of registered email addresses in the leaked data is as follows, clearly reflecting the preferences of the forum's user base, with a very high percentage of private and temporary email services:
Rank Mailbox Domain Name Number of Occurrences Service Type/Characteristics
1 gmail.com 239,747 Mainstream commercial mailboxes
2 proton.me 29,851 End-to-end encrypted privacy mailboxes
3 protonmail.com 12,382 End-to-end encrypted privacy mailboxes
4 onionmail.org 4,668 Anonymous encrypted mailboxes specializing in the Tor network
5 cock.li 4,577 Email hosting service emphasizing anonymity without personal verification
6 yahoo.com 4,478 Mainstream Business Email
7 qq.com 3,290 Mainstream commercial mailboxes
8 mozmail.com 2,395 Privacy forwarding mailbox provided by Firefox Relay
9 tutanota.com / tutamail.com 2,294 End-to-end encrypted privacy mailboxes
10 dnmx.org 1,441 Anonymous mail service
Data Analysis Interpretation:
High concentration of privacy services: more than half of the top 10 domains (Proton, OnionMail, Cock.li, Mozilla Relay, Tuta) are privacy-protecting services focused on anonymization, encryption or forwarding. This shows that BreachForums users are extremely anti-retroactivity and privacy-conscious.
"Room to Operate" Warning: When users refer to "high room to operate", they may be referring to the fact that an attacker can exploit the registration mechanism of these private mailboxes (e.g., without requiring cell phone number verification) to conduct correlation analysis, phishing, or launching targeted attacks against users of specific privacy services.
AI Agent Becomes Major Attack Vector for Hackers in 2026
Cybersecurity experts have issued a stark warning predicting that AI Agents will be a central target for hackers in 2026. According to research by Palo Alto Networks security professionals, the cybersecurity skills gap, which currently stands at 4.8 million people, will drive large-scale deployments of AI Agents in organizations, prompting attackers to shift the focus of their attacks from human operators to the AI Agents themselves.
Key Risk Points:
Continuous Online Vulnerability: AI Agents run 24/7 and are at risk of being exploited at any time, and international hackers can attack U.S. companies regardless of time zones.
Insider Threat Amplification: Compromised AI Agents May Gain Elevated Access to Critical APIs, Customer Information, and Cybersecurity Infrastructure
Lack of governance tools: the report emphasizes the need for a new "non-negotiable category of AI governance tools", including security agents and emergency cut-off switches.
Experts predict that this will be "the dividing line between the success and failure of smart-body AI."
Hackers Massively Attack AI Infrastructure: 91,000+ Malicious Sessions Exposed
Security researchers have documented a surge in coordinated attacks against AI infrastructure, with more than 91,000 malicious sessions logged between October 2025 and January 2026 The analysis reveals two distinct threat campaigns that systematically exploit the expanded attack surface of AI deployments.
Attack event details:
Campaign #1: Targeting Ollama model pull functionality and Twilio SMS webhook integration, attackers injected malicious registry URLs that generated 1,688 sessions in 48 hours over the Christmas period
Campaign 2: Launched on December 28, 2025, two IP addresses launched 80,469 sessions over 11 days to 73+ large language model endpoints to systematically scout for misconfigured proxies
Objective scope: to test OpenAI-compatible and Google Gemini API formats, covering the major model families GPT-4o, Claude, Llama, DeepSeek, Gemini, Mistral, Qwen and Grok
Traditional Cybersecurity Can't Protect AI Systems, Study Confirms
New research published by Harvard Business Review on January 8 states that traditional cybersecurity defenses fail to protect generative AI systems. The study combines survey data, executive interviews and lab analysis to reveal systemic security gaps.
Core Findings:
Supply chain vulnerability: there are structural risks in the AI supply chain, including the possibility of malicious implantation of hardware/software components at various points in the chain
Misaligned defenses: traditional defenses designed for rules-based software cannot protect generative AI systems that learn and adapt from data
Severe Talent Shortage: Extreme Lack of AI Security Professionals Exacerbates Enterprise Defense Dilemma
The study's conclusions are clear: Leaders must move beyond application tinkering to fortifying the infrastructure and supply chain on which AI depends, while using AI itself as a front-line defense to ensure resilience.