Meta’s AI training may have exposed your personal information to contract workers.
To fine-tune its AI models, Meta hires outside contractors to read conversations between users and its chatbot, which has one billion monthly active users as of May, according to the company. Now, some contractors are claiming to have viewed personal data while reading and reviewing exchanges.
Four contract workers hired through training companies Outlier and Alignerr told Business Insider that they repeatedly saw Meta AI chats containing users' names, phone numbers, email addresses, genders, hobbies, and other personal details. The information was either typed into a conversation by the user, something Meta's privacy policy warns against, or supplied to the contractors by Meta, which placed the personal data alongside chat histories.
One worker said that they saw personal information in more than half of the thousands of chats they reviewed per week. Two other workers claimed to have seen selfies that users from the U.S. and India sent to the chatbot.
Other big tech companies, like Google and OpenAI, have asked contractors to train AI in a similar manner and tackle comparable projects, but two of the contract workers who also juggled tasks for other clients claimed that Meta’s tasks were more likely to include personal information.
All of the contractors said that users engaged in personal discussions with Meta's AI chatbot, from flirting with it to talking about people and problems in their lives. Users sometimes wove personal details into these exchanges, such as their locations and job titles, which the contractors were later able to see.
While the situation may pose a significant data privacy risk, Meta claims that it has imposed “strict” measures around employees and contractors who see personal data.
“We intentionally limit what personal information they see, and we have processes and guardrails in place instructing them how to handle any such information they may encounter,” a Meta spokesperson told Business Insider.
Meta plans to invest heavily in AI, committing $66 billion to $72 billion in 2025 for AI infrastructure.