A major security lapse at the Chinese AI company DeepSeek has raised serious concerns about data privacy and cybersecurity. Researchers at Wiz Research discovered a DeepSeek database left completely unprotected on the internet, exposing sensitive user data and internal company information.
What Was Exposed?
The exposed ClickHouse database held more than a million records, including highly sensitive information: full chat histories, internal API keys, and operational details of DeepSeek’s backend systems. Worse, the database permitted full control over database operations, opening the door to further compromise.
The database was accessible via **oauth2callback.deepseek.com:9000** and **dev.deepseek.com:9000**, with no authentication required. This meant that anyone with basic technical knowledge could run arbitrary SQL queries against it.
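To illustrate the class of exposure described above (not DeepSeek’s exact setup), the sketch below shows how an unauthenticated ClickHouse HTTP endpoint can be queried. Note that port 9000 is ClickHouse’s native TCP protocol; its HTTP interface typically listens on port 8123. The host and helper names here are hypothetical.

```python
import requests

# Hypothetical endpoint: ClickHouse's HTTP interface usually listens on 8123;
# port 9000 (as on the exposed hosts) speaks the native TCP protocol instead.
BASE_URL = "http://db.example.com:8123"

def build_probe(base_url, query="SELECT 1"):
    """Build the URL and parameters for an unauthenticated ClickHouse query.

    ClickHouse's HTTP interface accepts SQL via the `query` parameter;
    if no user/password is configured, any such request is executed.
    """
    return base_url, {"query": query}

def is_exposed(base_url, timeout=5):
    """Return True if the server answers an unauthenticated SELECT 1."""
    url, params = build_probe(base_url)
    try:
        resp = requests.get(url, params=params, timeout=timeout)
        return resp.status_code == 200 and resp.text.strip() == "1"
    except requests.RequestException:
        return False
```

Wiz reported running queries such as `SHOW TABLES` this way; an open endpoint that answers an unauthenticated `SELECT 1` would execute any other statement just as readily, which is why the exposure amounted to full control over the database.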
How Was the Vulnerability Discovered?
According to a blog post by Wiz Research, the security team discovered the unprotected database within minutes while conducting a routine check of DeepSeek’s external security posture. The researchers emphasized that while such vulnerabilities are not uncommon, the scale and severity of this incident are particularly concerning.
Ami Luttwak, Chief Technology Officer of Wiz, described the incident as a “dramatic mistake,” highlighting the low effort required to exploit the vulnerability and the high level of access it granted. He cautioned that DeepSeek’s services are “not mature enough to be used with sensitive data,” and said:

> “The fact that mistakes happen is correct. But this is a dramatic mistake, because the effort is very low and the level of access we received is very high.”
This incident is a clear reminder of the risks that accompany the rapid growth of AI technology. While much of the debate on AI safety focuses on speculative future harms, the real danger often comes from simple operational mistakes, such as databases left exposed. Gal Nagli, a security researcher at Wiz, emphasized the same point: “The real risks often come from fundamental issues—like the accidental external exposure of databases,” he said.
DeepSeek’s Response
DeepSeek acted swiftly to address the issue, closing the vulnerability within an hour of being notified. However, it remains unclear whether unauthorized third parties accessed the data during the time it was exposed.
The incident may damage trust in DeepSeek at a sensitive moment: the company had just gained worldwide attention for its AI model, DeepSeek-R1, which reportedly rivals GPT-4-class models in capability at a fraction of the cost. The exposure of such sensitive data casts doubt on the company’s ability to safeguard user information.

More broadly, the DeepSeek leak underscores why robust security is essential for AI development. As AI companies scale rapidly and ship new technology, protecting user data is fundamental to maintaining trust. For now, this case stands as a warning: when it comes to security, healthy skepticism is wise.