Ollama is one of the most popular open-source projects for running AI models, with over 70k stars on GitHub and hundreds of thousands of monthly pulls on Docker Hub.
Inspired by Docker, Ollama aims to simplify the process of packaging and deploying AI models.
Ollama users are encouraged to upgrade their Ollama installation to version 0.1.34 or newer.
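One quick way to verify which version an installation is running is to query the server's /api/version endpoint and compare the result against the fixed release. A minimal sketch, assuming a locally reachable server and plain dotted numeric version strings (pre-release suffixes are not handled):

```python
import json
import urllib.request

def is_patched(version: str, fixed: tuple = (0, 1, 34)) -> bool:
    """Return True if a dotted numeric version is at or above the fixed release.

    Assumes plain numeric versions like "0.1.34"; suffixed builds would need
    extra parsing.
    """
    parts = tuple(int(p) for p in version.split("."))
    return parts >= fixed

def check_server(base_url: str) -> bool:
    """Ask a running Ollama server for its version and compare it to the fix."""
    with urllib.request.urlopen(f"{base_url}/api/version", timeout=5) as resp:
        return is_patched(json.load(resp)["version"])

# Usage (requires a reachable server; 11434 is Ollama's default port):
# check_server("http://127.0.0.1:11434")
```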
Our research indicates that, as of June 10, there are a large number of Ollama instances running a vulnerable version that are exposed to the internet.
Over the past year, multiple remote code execution vulnerabilities were identified in inference servers, including TorchServe, Anyscale's Ray, and Ollama.
Despite this, our internet scan revealed over 1,000 exposed Ollama instances hosting numerous AI models, including private models not listed in the Ollama public repository, highlighting a significant security gap.
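Identifying an exposed instance is straightforward, because an unauthenticated Ollama server answers a plain GET on its root path with a recognizable banner. A minimal probe, for use only against hosts you are authorized to test (the banner-matching logic assumes the default server response):

```python
import urllib.request

def looks_like_ollama(body: str) -> bool:
    """An unauthenticated Ollama server answers GET / with this banner."""
    return "Ollama is running" in body

def probe(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if host:port responds like an exposed Ollama server.

    Only probe hosts you own or are explicitly authorized to test.
    """
    try:
        with urllib.request.urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            return looks_like_ollama(resp.read().decode("utf-8", "replace"))
    except OSError:
        return False
```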
To exploit this vulnerability, an attacker must send specially crafted HTTP requests to the Ollama API server.
Being one of the most popular open-source projects for running AI models, with over 70k stars on GitHub and hundreds of thousands of monthly pulls on Docker Hub, Ollama seemed to be the simplest way to self-host such a model.

Ollama Architecture
Ollama consists of two main components: a client and a server.
The client is what the user interacts with; for example, a CLI. While experimenting with Ollama, our team found a critical security vulnerability in the Ollama server.
It is important to mention that Ollama does not support authentication out-of-the-box.
It is generally recommended to deploy Ollama behind a reverse proxy that enforces authentication if the user decides to expose their installation.
One of the endpoints, /api/pull, can be used to download a model from an Ollama registry.
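For reference, a legitimate pull through this endpoint is a simple POST with a JSON body. The sketch below follows the request shape of Ollama's documented API; the model name is illustrative:

```python
import json
import urllib.request

def build_pull_request(base_url: str, model: str) -> urllib.request.Request:
    """Build the POST /api/pull request that asks the server to fetch a model."""
    body = json.dumps({"name": model, "stream": False}).encode()
    return urllib.request.Request(
        f"{base_url}/api/pull",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires a running server):
# req = build_pull_request("http://127.0.0.1:11434", "llama3")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```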
Security teams should update their Ollama instances to the latest version to mitigate this vulnerability.
It is recommended not to expose Ollama to the internet unless it is protected by some sort of authentication mechanism, such as a reverse proxy.
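A minimal sketch of the kind of setup meant here, assuming nginx with HTTP basic authentication; the hostname is a placeholder and the credentials file would be created separately with htpasswd:

```nginx
server {
    listen 443 ssl;
    server_name ollama.example.com;                 # placeholder hostname

    location / {
        auth_basic           "Ollama";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd
        proxy_pass           http://127.0.0.1:11434;
        proxy_set_header     Host $host;
    }
}
```

With this in place, the Ollama server itself listens only on localhost, and every request must carry valid credentials before it is proxied through.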
We responsibly disclosed this vulnerability to Ollama's development team in May 2024.
Ollama promptly investigated and addressed the issue while keeping us updated.
May 5, 2024 - Ollama acknowledged the receipt of the report.
May 5, 2024 - Ollama notified Wiz Research that they committed a fix to GitHub.
Ollama committed a fix about four hours after receiving our initial report, demonstrating an impressive response time and commitment to product security.
This Cyber News was published on packetstormsecurity.com. Publication date: Wed, 26 Jun 2024 19:13:05 +0000