Tea, a viral women-only dating safety app, suffered a significant data breach when hackers discovered its unsecured database, exposing more than 72,000 private images, including selfies and government IDs. Users had been required to upload these documents to verify their identity, a measure intended to block fake accounts and improve safety. In practice, the very verification process the app promoted as protecting women ended up exposing their personal information. The leaked data, which also includes tens of thousands of private messages, became searchable on various platforms soon after the breach was reported.

The hacker who first surfaced the data attributed the incident to inadequate security measures, suggesting the app had been built through 'vibe coding', a practice in which developers rely on AI-generated code without thoroughly reviewing it for security flaws. The incident underscores ongoing concerns about the safety of user data in applications built with generative AI, as significant portions of AI-generated code have been found to contain exploitable vulnerabilities. The company acknowledged that the data was stored to comply with cyber-bullying prevention requirements but has not responded to requests for further comment.
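The report doesn't detail the exact misconfiguration, but an "unsecured database" of this kind typically means storage that serves objects to anyone, with no authentication required. A minimal sketch of how such an exposure can be probed, assuming a hypothetical bucket URL and object path (this illustrates the general failure mode, not the actual method used against Tea):

```python
# Probe whether a storage bucket serves objects without authentication.
# An HTTP 200 on an unauthenticated GET means anyone on the internet
# can read the object. The URL and path below are hypothetical.
import urllib.error
import urllib.request

BUCKET_URL = "https://storage.example.com/app-uploads"  # hypothetical


def is_publicly_readable(object_path: str) -> bool:
    """Return True if the object can be fetched with no credentials."""
    request = urllib.request.Request(f"{BUCKET_URL}/{object_path}")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # 401/403 means the bucket requires auth; 404 means no such object.
        return False
    except urllib.error.URLError:
        # Network failure or unreachable host: treat as not readable.
        return False


if __name__ == "__main__":
    # A properly configured bucket should refuse this request outright.
    print(is_publicly_readable("verification/example-user/id.jpg"))
```

If a request like this succeeds for user-uploaded verification documents, the data is effectively public, which is consistent with how the leaked images reportedly became searchable so quickly.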
