Thousands of AI-Generated and Deepfake Images Exposed in Unprotected Database Online

Alina BÎZGĂ

April 01, 2025


There’s no telling what one may stumble upon when scouring the web. Recently, cybersecurity researcher Jeremiah Fowler uncovered a database, left online without password protection, that housed nearly 100,000 pieces of AI-generated content, including explicit face-swapped images and illegal imagery depicting minors.

According to Fowler, the database belonged to GenNomis by AI-Nomis, a South Korean AI company, and contained 93,485 files totaling roughly 48 GB. It featured a variety of AI-generated or manipulated images, along with the command prompts used to create them. Although there was no immediate evidence of personally identifiable information (PII), the predominantly explicit nature of the content and the sheer volume of files raised concerns over both privacy and the potential for misuse.

A portion of the sample also featured well-known public figures, whose likenesses had been used to generate lifelike images without authorization.

“Although I did not see any PII or user data, this was my first look behind the scenes of an AI image generator. It was a wake-up call for how this technology could potentially be abused by users, and how developers must do more to protect themselves and others,” noted Fowler.

In his report, the researcher said he promptly notified the company of his findings and that the database was subsequently taken offline. However, Fowler highlighted the potential misuse of the platform’s image-manipulation features, the lack of user consent, and the possibility of non-consensual content being uploaded to the platform. He also noted that explicit images generated by this type of technology could be used for extortion, reputation damage, or other malicious purposes.

“There are numerous AI image generators offering to create pornographic images from text prompts, and there is no shortage of explicit images online for the AI models to pull from,” Fowler explained. “Any service that provides the ability to face-swap images or bodies using AI without an individual’s knowledge and consent poses serious privacy, ethical, and legal risks.”

According to Wired, when the GenNomis website was operational, it openly permitted explicit AI adult imagery. The website also included an “NSFW” gallery and a “marketplace” where users could share and potentially sell albums of AI-generated photos.

When images of recognizable people, whether celebrities or private individuals, are processed by AI systems, the results can be alarmingly realistic and can lead to reputational harm or other forms of abuse.

Safety Measures: Protecting Your Likeness Online

While service providers are obliged to secure their platforms and enforce usage guidelines, individuals can also mitigate risks:

  1. Restrict High-Resolution Photos: Publicly posting high-quality images of your face makes it easier for AI tools to replicate your appearance. Consider using privacy controls on social media to limit exposure.
  2. Watermarks and Filters: Adding watermarks or using filters can make it more difficult for automated tools to isolate and reconstruct identifiable facial details; a minimal watermarking sketch follows this list.
  3. Monitor Mentions and Images: Periodically run searches for your name or likeness and set up online alerts to track unexpected appearances or suspicious uses.
  4. Stay Cautious with New Platforms: Review privacy policies before uploading images, and avoid sites that lack clear data-protection measures or secure HTTPS connections.
  5. Document Misuse: If you discover your likeness has been used without your consent, record URLs, screenshots, and other evidence. Consider contacting legal authorities if the content is deemed harmful or illegal.
  6. Use Digital Identity Protection Services: Bitdefender’s Digital Identity Protection service adds an extra layer of security by monitoring your digital footprint and data breaches involving your personal information, and by flagging potential impersonation attempts, notifying you of doppelganger or imposter accounts that use deepfake images to mimic your appearance or identity.
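
As a practical illustration of tip 2, the sketch below uses Python and the Pillow imaging library to stamp a visible, semi-transparent text watermark onto a photo before it is shared online. The file names, watermark text, and placement are illustrative assumptions, not details from Fowler’s report or Bitdefender guidance.

    # Minimal sketch: add a visible, semi-transparent text watermark to a photo
    # before posting it online. Requires Pillow (pip install pillow).
    # File names and the watermark text are placeholders.
    from PIL import Image, ImageDraw, ImageFont

    def watermark_photo(input_path: str, output_path: str, text: str = "sample watermark") -> None:
        base = Image.open(input_path).convert("RGBA")

        # Draw on a transparent overlay so the text can be blended at partial opacity.
        overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
        draw = ImageDraw.Draw(overlay)
        font = ImageFont.load_default()  # swap in a larger TrueType font if one is available

        # Measure the text and place it in the lower-right corner with a small margin.
        left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
        text_w, text_h = right - left, bottom - top
        position = (base.width - text_w - 20, base.height - text_h - 20)

        # White text at roughly 50% opacity.
        draw.text(position, text, font=font, fill=(255, 255, 255, 128))

        # Merge the overlay onto the original image and save a flattened copy.
        Image.alpha_composite(base, overlay).convert("RGB").save(output_path)

    watermark_photo("profile_photo.jpg", "profile_photo_watermarked.jpg")

A visible mark will not stop a determined abuser, but it raises the effort needed to reuse a photo cleanly and makes unauthorized copies easier to spot and document.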

Disclaimer:
This article is intended for informational purposes only. It does not suggest wrongdoing by GenNomis, AI-NOMIS, or any of their associates. Any examples provided here highlight the importance of data security, the risks of AI-generated media, and preventive measures for personal privacy and protection.

