LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks

Cybersecurity researchers have disclosed three security vulnerabilities impacting LangChain and LangGraph that, if successfully exploited, could expose filesystem data, environment secrets, and conversation history.

Both LangChain and LangGraph are open-source frameworks used to build applications powered by Large Language Models (LLMs). LangGraph is built on top of LangChain to support more sophisticated, non-linear agentic workflows. According to statistics from the Python Package Index (PyPI), LangChain, LangChain-Core, and LangGraph were downloaded more than 52 million, 23 million, and 9 million times, respectively, in the last week alone.

Read more: thehackernews.com