Meta AI App Raises Privacy Red Flag with Public Sharing of User Chats


Prime Highlights 

  • Meta’s new AI app is under fire for unintentionally exposing users’ private conversations on a public feed.
  • Users posted confidential information without realizing their conversations were visible to the public.

Key Facts 

  • Meta launched the AI app in April 2025 with a feature called “Discover” that publicly displays user-AI conversations.
  • Many users inadvertently posted sensitive discussions, ranging from health and legal issues to personal affairs.
  • Meta later implemented a warning pop-up, though concerns about the app’s design and users’ awareness persist.

Key Background 

In April 2025, Meta launched its AI-powered chatbot app, which blends conversational AI with social networking features. One of the app’s flagship features is the “Discover” feed, where users can publish their conversations with the AI for others to read, reply to, or comment on. While the feature was designed to showcase interesting exchanges and creative conversations, it quickly became controversial.

Many users were unaware that what they typed was being exposed. These were not trivial exchanges—some touched on deeply personal and intimate topics such as mental health struggles, legal troubles, relationship advice, and medical conditions. In some cases, people even included real names, phone numbers, or other identifying details, believing they were in a private setting.

The confusion stems primarily from the app’s user interface, which allowed posting a conversation to Discover with as little as one tap and only a minimal, easily dismissed warning. Because the app is linked to users’ existing Meta accounts (e.g., Facebook or Instagram), some of these posts could be traced back to real-life identities, compounding the privacy risks.

Following widespread criticism and press coverage, Meta introduced a clearer warning system. Now, when a user tries to share a chat to the Discover feed, a confirmation prompt explicitly warns that the conversation will be public. Privacy advocates, however, argue that the app’s overall design still nudges users toward sharing—a tactic known as a “dark pattern.”

Despite these changes, the incident has raised broader concerns about AI tools and online privacy. Users increasingly treat AI chatbots as they would a therapist or confidant, which means these platforms are handling highly sensitive information. Critics argue that it is the responsibility of tech giants to fully inform users about how their information is used and displayed.

The saga surrounding Meta’s AI app echoes previous controversies over the company’s handling of user data and may draw the attention of international privacy regulators, especially in regions with stringent data protection laws.
