Our last post about Facebook’s proposed new rules focused on the commercialization of Facebook user content. That has been the general focus of discussion of Facebook’s action. Take this piece from the Wall Street Journal, for example.
There is something else to discuss, though. In a previous post here at Big Data and the Law we raised the issue of discovered or inferred information. In particular, we referenced a study conducted with Facebook data, which showed that it is possible to use disclosed personal information to discover (or infer) additional, undisclosed personal information.
With that in mind, what seems to be missing from the discussion of Facebook’s proposed new rules is Facebook’s addition of the word “infer.” It appears in several places in Facebook’s proposed new Data Use Policy, so one has to assume there was some thought and purpose behind it. What might that be?
Perhaps Facebook learned something from that study about inferred information. Perhaps Facebook learned that they could create a whole universe of personal information that was never actually disclosed to Facebook, and they decided it would be great if they could use it.
So here is where Facebook added “infer”:
So we can show you content that you may find interesting, we may use all of the information we receive about you to serve ads that are more relevant to you. For example, this includes:
information you provide at registration or add to your account or timeline,
things you share and do on Facebook, such as what you like, and your interactions with advertisements, partners, or apps,
keywords from your stories, and
things we infer from your use of Facebook.
For many ads we serve, advertisers may choose their audience by location, demographics, likes, keywords, and any other information we receive or infer about users. Here are some of the ways advertisers may target relevant ads:
demographics and interests: for example, 18 to 35 year-old women who live in the United States and like basketball;
topics or keywords: for example, “music” or people who like a particular song or artist;
Page likes (including topics such as products, brands, religion, health status, or political views): for example, if you like a Page about Gluten-free food, you may receive ads about relevant food products; or
categories (including things like “moviegoer” or a “sci-fi fan”): for example, if a person “likes” the “Star Trek” Page and mentions “Star Wars” when they check into a movie theater, we may infer that this person is likely to be a sci-fi fan and advertisers of sci-fi movies could ask us to target that category.
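To make the category example above concrete, here is a toy sketch of what rule-based category inference might look like. This is purely illustrative: the function, field names, and rules are our own invention for this post, not Facebook’s actual system or data model.

```python
# Toy illustration of inferring undisclosed categories from disclosed signals.
# Hypothetical example only -- not Facebook's actual code or data model.

def infer_categories(profile):
    """Derive categories the user never disclosed from things they did disclose."""
    inferred = set()
    likes = set(profile.get("page_likes", []))
    mentions = set(profile.get("checkin_mentions", []))
    places = set(profile.get("checkin_places", []))

    # Rule mirroring the policy's example: liking the "Star Trek" Page and
    # mentioning "Star Wars" at a movie theater suggests a sci-fi fan.
    if "Star Trek" in likes and "Star Wars" in mentions:
        inferred.add("sci-fi fan")

    # Checking into a movie theater suggests the "moviegoer" category.
    if "movie theater" in places:
        inferred.add("moviegoer")

    return inferred

user = {
    "page_likes": ["Star Trek"],
    "checkin_mentions": ["Star Wars"],
    "checkin_places": ["movie theater"],
}
print(infer_categories(user))  # a set containing 'sci-fi fan' and 'moviegoer'
```

Note that the output categories never appear in the input: the user disclosed a like and a check-in, and the system manufactured the rest. That is the “whole universe of personal information that was never actually disclosed” at issue here.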
And it’s here:
We use the information we receive, including the information you provide at registration or add to your account or timeline, to deliver ads and to make them more relevant to you. This includes all of the things you share and do on Facebook, such as the Pages you like or key words from your stories, and the things we infer from your use of Facebook.
The inference of personal data is going to be a huge deal at some point. In the present situation with Facebook’s proposed new rules, it’s a subtle thing, likely because Facebook is seeking only a limited right to inferred information, and of course because the more obvious issues are so troublesome. When the inferred-information issue becomes more widely discussed, though, people will start to become concerned. As noted in our previous post, that has already started to happen in Europe.