Facebook hires people to snoop on your posts to train AI to snoop better

Reuters is reporting that over the past year, a team of 250+ contract workers in India has been going through the posts of millions of Facebook users, including photos, status updates and other content posted since 2014.

The labelers record the subjects of each post and try to ascertain the author’s intention behind it. The work, according to Facebook, is aimed at understanding how users post to its services and helping the company develop new features, with the ultimate goal of increasing usage and ad revenue.

This work, along with other “content labeling” projects Facebook is running at the moment, employs thousands of people. Many of these projects are aimed at “training” the underlying software that determines what users see in their news feeds and which ads are shown at a particular time.
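To make the labeling-for-training idea concrete, here is a minimal, purely illustrative sketch of what a hand-labeled post record might look like before it is fed to a classifier. Reuters does not publish Facebook’s actual schema, so every field name and label value below is an assumption.

```python
from dataclasses import dataclass
from typing import List

# Illustrative only: none of these field or label names come from Facebook.
@dataclass
class LabeledPost:
    post_id: str
    text: str
    content_type: str       # e.g. "status", "photo_caption"
    subjects: List[str]     # what the post is about, per the labeler
    author_intent: str      # the labeler's guess at why it was posted

# A pile of hand-labeled examples like these could serve as supervised
# training data for software that infers subjects and intent automatically.
examples = [
    LabeledPost("p1", "Just finished my first 10k run!", "status",
                ["fitness", "personal milestone"], "share_achievement"),
    LabeledPost("p2", "Anyone know a good plumber near downtown?", "status",
                ["home repair"], "ask_for_recommendation"),
]

for ex in examples:
    print(f"{ex.post_id}: subjects={ex.subjects}, intent={ex.author_intent}")
```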

This program, along with previous missteps, adds fuel to the fire surrounding privacy issues at Facebook. The company is already facing worldwide investigations over other privacy abuses involving the sharing of user data with business partners, as well as an almost constant stream of news about the platform’s overall lack of security, such as the recent revelation that Facebook kept user passwords and other data in plain text, making it much easier to steal users’ information.

A Facebook spokeswoman said: “We make it clear in our data policy that we use the information people provide to Facebook to improve their experience and that we might work with service providers to help in this process.”

Facebook launched the project in April 2018. The Indian firm is being paid $4 million and employs 250+ “labelers,” according to the company. The contracted work consisted of analyzing posts from the past five years.

After completing this initial work, the team was cut down to 30 workers who now concentrate on “labeling” posts from the previous month. The work is expected to continue at least through the end of 2019.

Other Facebook “labeling” projects include looking for sensitive topics or offensive language in videos posted to the platform. The goal appears to be further training of automated Facebook tools that help advertisers avoid sponsoring videos that are adult or political in nature.

Another use could be better targeting of users of the Marketplace feature, where AI-automated recommendations for new listings would be shown to users based on their past posts.
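As a rough illustration of how labeled past posts could feed into listing recommendations, here is a hypothetical sketch. The function, data shapes and tags are assumptions for illustration, not Facebook’s actual system.

```python
from collections import Counter
from typing import Dict, List

def rank_listings(past_post_subjects: List[str],
                  listings: Dict[str, List[str]]) -> List[str]:
    """Rank listing IDs by how much their tags overlap with the subjects
    a user has posted about before (a toy stand-in for real targeting)."""
    interest = Counter(past_post_subjects)
    scored = {
        listing_id: sum(interest[tag] for tag in tags)
        for listing_id, tags in listings.items()
    }
    return sorted(scored, key=scored.get, reverse=True)

# Example: a user who often posts about fitness and cycling would see the
# bike listing ranked ahead of the sofa.
print(rank_listings(
    ["fitness", "cycling", "fitness", "cooking"],
    {"bike_123": ["cycling", "fitness"], "sofa_456": ["furniture"]},
))
```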

According to the report, there seems to be no way for Facebook users to opt out of this data collection, even for private posts shared between friends and family.