CVE-2025-68664: Serialization Injection Vulnerability Report for LangChain, an Open Source Framework for Large Language Models
LangChain, an open source framework for large language models, has disclosed a high-severity serialization injection vulnerability (CVE-2025-68664), discovered by Yarden Porat, a security researcher at Cyata Security. The flaw stems from a missing "lc" key check in the serialization/deserialization process, which allows an attacker, by means of prompt injection, to leak environment variables, instantiate arbitrary objects, or even execute code remotely. The vulnerability affects all deployments of LangChain Core before version 0.3.81, as well as the 1.0.0 series prior to 1.2.5. Patched versions 0.3.81 and 1.2.5 were released on December 24, together with a tightened default security policy.
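To make the mechanism concrete, the following is a minimal toy sketch, not LangChain's actual code: it imitates the general shape of an "lc"-tagged constructor/secret serialization format to show why a deserializer that revives objects from untrusted JSON can leak environment variables or instantiate arbitrary classes. The function names here are hypothetical.

# Toy deserializer (illustrative only, NOT LangChain internals).
import importlib
import json
import os

def toy_loads(payload: str):
    def revive(node):
        if isinstance(node, dict) and "lc" in node:
            if node.get("type") == "secret":
                # Secret references resolve from the environment, so an
                # injected "secret" node exfiltrates a variable's value.
                return os.environ.get(node["id"][0])
            if node.get("type") == "constructor":
                # Arbitrary module.class instantiation driven by attacker data.
                *mod_path, cls_name = node["id"]
                cls = getattr(importlib.import_module(".".join(mod_path)), cls_name)
                return cls(**{k: revive(v) for k, v in node.get("kwargs", {}).items()})
        if isinstance(node, dict):
            return {k: revive(v) for k, v in node.items()}
        if isinstance(node, list):
            return [revive(v) for v in node]
        return node
    return revive(json.loads(payload))

# A string smuggled in via prompt injection round-trips into a secret lookup:
attacker_json = '{"lc": 1, "type": "secret", "id": ["OPENAI_API_KEY"]}'
print(toy_loads(attacker_json))  # -> value of OPENAI_API_KEY, if set

This is the class of behavior the patched releases constrain by tightening which nodes the default deserialization policy will accept.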
-
AI Hacking: Automated Penetration Analysis by AI Agents
Strix represents a paradigm shift in the field of cybersecurity testing - an evolution from a manual-centric penetration approach to a multi-agent collaborative automation model. The tool realizes complete vulnerability lifecycle management (reconnaissance, exploitation, validation) through LLM-driven autonomous agents, demonstrating significant cost advantages (costs reduced by 70% or more) and time efficiency advantages (test cycles shortened from weeks to hours) over traditional manual penetration testing and passive scanning tools. Its limitations are equally obvious, however: its success rate on zero-day vulnerability exploitation is only 10-12%, its detection of business logic vulnerabilities falls seriously short, and the inherent security risks of multi-agent systems (prompt injection, abuse of inter-agent trust) call for a structured governance framework.
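As a rough illustration of the reconnaissance-exploitation-validation lifecycle described above, here is a minimal Python sketch of a sequential multi-agent handoff. Everything in it (the Agent class, the pipeline function) is hypothetical scaffolding, not Strix's actual architecture; a real tool would back each role with an LLM and sandboxed tooling.

from dataclasses import dataclass

@dataclass
class Agent:
    role: str  # "recon", "exploit", or "validate"

    def act(self, state: dict) -> dict:
        # A real system would call an LLM with role-specific prompts and
        # tools here; this stub only records the handoff.
        state.setdefault("log", []).append(f"{self.role} ran against {state['target']}")
        return state

def pipeline(target: str) -> dict:
    # Each stage consumes the previous stage's state - exactly the
    # inter-agent trust boundary the governance concerns above point at.
    state = {"target": target}
    for agent in (Agent("recon"), Agent("exploit"), Agent("validate")):
        state = agent.act(state)
    return state

print(pipeline("https://staging.example.internal")["log"])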
-
Artificial Intelligence Security Defense in Depth: An Explanation of Google's SAIF AI Security Framework
With the widespread penetration of large language models (LLMs) and generative artificial intelligence (GenAI) into enterprise applications, the traditional software security paradigm, built on deterministic logic, is struggling to cope with new stochastic threats such as model inversion, data poisoning, and prompt injection. Google's Secure AI Framework (SAIF), launched in 2023, proposes a systematic defense architecture that aims to combine traditional cybersecurity best practices with the specific characteristics of artificial intelligence (AI) systems. This paper analyzes SAIF's six core pillars, its ecosystem collaboration mechanisms, and its evolution path from the perspective of architectural design, providing theoretical and practical references for building enterprise-grade AI security systems.
-
Artificial Intelligence (AI) Large Model Security Risks and Defense-in-Depth Report
This report is organized around the five core attack surfaces along the critical links of an AI system - Assistants, Agents, Tools, Models, and Storage - and presents the targeted security risks, defense architectures, and solutions for each.
-
Apache OFBiz XML-RPC remote code execution vulnerability (CVE-2023-49070)
Apache OFBiz is an open source product for enterprise process automation. It includes framework components and business applications for ERP, CRM, e-commerce, supply chain management, and manufacturing resource planning. Apache OFBiz versions before 18.12.10 contain a remote code execution vulnerability: because the XML-RPC component is no longer maintained, an unauthenticated attacker can abuse the XML-RPC endpoint to execute code remotely and take control of the server.
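For defenders, a first triage step is simply checking whether the XML-RPC endpoint is reachable at all. The sketch below sends a benign introspection call rather than any exploit payload; the /webtools/control/xmlrpc path is the endpoint named in public advisories, the host name is a placeholder, and this should only be run against systems you are authorized to test.

# Hedged reachability probe for the OFBiz XML-RPC endpoint (no exploit code).
import xmlrpc.client

def xmlrpc_exposed(base_url: str) -> bool:
    proxy = xmlrpc.client.ServerProxy(f"{base_url}/webtools/control/xmlrpc")
    try:
        proxy.system.listMethods()  # harmless introspection request
        return True
    except xmlrpc.client.Fault:
        # Even a Fault proves the endpoint parses XML-RPC requests.
        return True
    except Exception:
        # Connection errors, TLS failures, or a 404 all mean "not reachable"
        # for the purposes of this quick check.
        return False

if __name__ == "__main__":
    print(xmlrpc_exposed("https://ofbiz.example.com"))

A positive result only shows the endpoint is exposed; confirming the vulnerability itself still requires checking the deployed OFBiz version against 18.12.10.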
-
Sixteen countries around the world jointly release guidelines for the development of secure artificial intelligence systems
Guidance for providers of any system that uses artificial intelligence (AI), whether those systems are created from scratch or built on top of tools and services provided by others.
-
Google Android 14 input method information leakage vulnerability and impact
Google Android 14 contains an input method information disclosure vulnerability: due to a side channel information leak, there is a possible way to determine whether an application is installed without holding query permissions. This could lead to local information disclosure with no additional execution privileges needed, and exploitation requires no user interaction.
-
Malicious AI tool FraudGPT is sold on the dark web, raising cybersecurity concerns
With the rise of generative AI models, the threat landscape has changed dramatically. Now another hacker has created a malicious AI tool called FraudGPT, built specifically for offensive purposes such as crafting spear-phishing emails, creating cracking tools, and carding. The tool is currently for sale on various dark web markets and on Telegram. It is advertised as "capable of generating a variety of network attack code", and reportedly "more than 3,000 buyers have placed orders in less than a week."
-
APT-C-23 hacker group targets Middle Eastern users with new Android spyware
A threat actor known for striking targets in the Middle East has once again evolved its Android spyware, enhancing its capabilities to make it stealthier and more persistent while concealing itself behind seemingly innocuous app updates. Reports indicate that a new variant of the spyware has been…
-
Ukraine accuses Gamaredon cyber espionage group of ties to Russia's FSB
Ukraine's main law enforcement and counterintelligence agency on Thursday revealed the true identities of five people it said were involved in attacks attributed to a cyberespionage group called Gamaredon, and linked the members to Russia's Federal Security Service (FSB). Ukrainian security…