The women-only dating app Tea suffered a significant data breach after hackers discovered its unsecured database, exposing over 72,000 users' private images, including selfies and government IDs. The breach covered not only verification documents but also private messages, all of which became searchable online. Despite the app's marketing as a safe space for women, its lax security measures, which hackers attributed to "vibe coding," contributed to the incident. Users were required to upload IDs and selfies to verify their accounts and keep out fake profiles; ironically, that verification data is what leaked. The company had said the data was retained for law-enforcement compliance related to cyber-bullying prevention, but the breach raises serious concerns about security practices at tech companies that use generative AI for app development. As users scramble to protect their identities and monitor for misuse of their data, experts warn of the vulnerabilities tied to AI-generated code and emphasize the need for comprehensive security evaluations that extend beyond initial development.
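Public reporting on the incident attributed the exposure to a cloud storage bucket (reportedly Firebase) that was readable without authentication, a detail not stated in the summary above. As an illustrative sketch only, Firebase storage security rules show how small the gap is between a wide-open bucket and a minimally locked-down one; the `verification/{userId}` path here is hypothetical:

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Insecure pattern sometimes left in place during rapid prototyping:
    //   allow read, write: if true;
    // That single line makes every object, including ID photos,
    // downloadable by anyone who finds the bucket URL.

    // Minimal hardening: only the authenticated owner of a record
    // can read or write their own verification uploads.
    match /verification/{userId}/{fileName} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}
```

Rules like these are easy to scaffold with AI tooling and equally easy to leave in their permissive default state, which is why experts recommend a dedicated security review before launch rather than trusting generated defaults.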
