Meta AI's Public 'Discover' Feed Raises Privacy Concerns as Users Unknowingly Share Prompts

Growth Compass

A new feature within Meta AI, allowing users to publicly share their AI prompts and responses on a "Discover" feed, is raising significant privacy concerns. While Meta states that sharing is opt-in, internet safety experts and a recent BBC investigation suggest many users may be inadvertently making their private queries public, with potential ramifications for their personal information and online reputation.

Meta AI, which launched earlier this year and is accessible through Facebook, Instagram, and WhatsApp, as well as via a standalone app, includes a "Discover" feed designed for users to "share and explore how others are using AI." Meta emphasizes that users are "in control" and that "nothing is shared to your feed unless you choose to post it." A pop-up message also warns users before posting that "Prompts you post are public and visible to everyone... Avoid sharing personal or sensitive information."

However, the nature of some of the publicly shared content suggests a disconnect between Meta's warnings and users' understanding of how the feature works. The BBC uncovered numerous instances in which highly personal or sensitive queries were posted, apparently without the user realizing that the "Discover" feed is public. These examples include:

  • Academic Misconduct: Users uploading photos of school or university test questions and requesting answers from Meta AI. One such chat was even publicly titled "Generative AI tackles math problems with ease."

  • Sensitive Personal Queries: A user's conversation exploring questions about their gender identity and potential transition was found publicly available.

  • Questionable Content Generation: Searches for "scantily-clad characters" and anthropomorphic animal characters wearing minimal clothing were discovered. One particularly concerning instance involved a user, traceable via their username and profile picture to their Instagram account, asking Meta AI to generate an image of an animated character wearing only underwear.

Rachel Tobac, CEO of US cybersecurity company Social Proof Security, voiced strong concerns on X (formerly Twitter), calling it "a huge user experience and security problem." She highlighted the discrepancy between user expectations of a private AI chatbot and the reality of a public social media-style feed. "Because of this, users are inadvertently posting sensitive info to a public feed with their identity linked," Tobac stated.

While Meta allows users to make their prompts private in their account settings and to withdraw a post after sharing it, the fundamental issue appears to be a lack of clarity about the "Discover" feed's public accessibility, especially given the private and often sensitive nature of AI interactions. The ease with which usernames and profile pictures can link public posts back to other social media accounts further exacerbates these privacy risks.

As Meta AI continues to roll out, currently available in the UK via browser and in the US via an app, the question remains whether users fully comprehend the implications of sharing their AI conversations. These incidents serve as a crucial reminder for users to exercise extreme caution and review privacy settings when interacting with new AI tools, especially those with public-facing features.
