With the following crawler configuration:
```python
from bs4 import BeautifulSoup as Soup
from langchain_community.document_loaders import RecursiveUrlLoader

url = "https://example.com"
loader = RecursiveUrlLoader(
    url=url,
    max_depth=2,
    prevent_outside=True,  # default; meant to restrict crawling to the base URL
    extractor=lambda x: Soup(x, "html.parser").text,
)
docs = loader.load()
```
An attacker in control of the contents of `https://example.com` could place a malicious HTML file there containing links such as `https://example.completely.different/my_file.html`, and the crawler would download that file as well, even though `prevent_outside=True` is set: the scope check only tests whether the link string starts with the base URL, and `https://example.com` is a string prefix of `https://example.completely.different`.
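The flaw can be illustrated with a minimal sketch of a naive string-prefix scope check (an illustration of the failure mode, not the library's exact code) alongside a host-based comparison that rejects the bypass:

```python
from urllib.parse import urlparse

base = "https://example.com"

def is_inside_naive(link: str) -> bool:
    # Vulnerable pattern: any link that merely *starts with* the base URL
    # string is treated as in-scope.
    return link.startswith(base)

def is_inside_by_host(link: str) -> bool:
    # Safer pattern: compare the parsed network location (host) instead
    # of raw string prefixes.
    return urlparse(link).netloc == urlparse(base).netloc

attacker_link = "https://example.completely.different/my_file.html"

print(is_inside_naive("https://example.com/page"))  # True, as intended
print(is_inside_naive(attacker_link))               # True — bypass!
print(is_inside_by_host(attacker_link))             # False — rejected
```

The attacker-controlled URL passes the naive check because `"https://example.com"` is a character-for-character prefix of `"https://example.completely.different"`, while a host comparison sees two distinct domains.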
https://github.com/langchain-ai/langchain/blob/bf0b3cc0b5ade1fb95a5b1b6fa260e99064c2e22/libs/community/langchain_community/document_loaders/recursive_url_loader.py#L51-L51
Resolved in https://github.com/langchain-ai/langchain/pull/15559
Publication date: Mon, 26 Feb 2024 22:27:00 +0000