In a tech post published on the Guardian this past Tuesday, Antonio Garcia-Martinez, a former product manager at Facebook, explains how he "was charged with turning Facebook data into money, by any legal means":
Converting Facebook data into money is harder than it sounds, mostly because the vast bulk of your user data is worthless. Turns out your blotto-drunk party pics and flirty co-worker messages have no commercial value whatsoever.
But occasionally, if used very cleverly, with lots of machine-learning iteration and systematic trial-and-error, the canny marketer can find just the right admixture of age, geography, time of day, and music or film tastes that demarcate a demographic winner of an audience. The “clickthrough rate”, to use the advertiser’s parlance, doesn’t lie.
Yadda yadda, we've heard this all before. It's how most ad platforms operate these days -- harnessing machine learning and all sorts of other [likely] cobbled-together algorithms that funnel proprietary data to advertisers and agencies, who use it in campaigns to micro-target audiences and potential customers.
This is probably where privacy advocates come in, shouting that this is a misuse of personal data. But is it? Facebook provides its users a free service monetized by users' own willingness to share with Facebook (and, subsequently, its advertisers) everything about themselves. While you could argue that some of the data provided is "personally identifiable information" (PII), Facebook hasn't forced you to share it. And since users volunteer that information, Facebook can more or less do what it wants with it. Garcia-Martinez tends to agree, arguing that processing profile traits and post contents into demographic and audience targeting is straightforward programming, so should its application really matter to the masses?
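To make that concrete, here's a minimal, entirely hypothetical sketch of the "systematic trial-and-error" Garcia-Martinez describes: try a handful of candidate trait combinations, measure each one's clickthrough rate, and keep the winner. The segments, numbers, and names are invented for illustration; the real systems layer far more machine-learning machinery on top of this basic loop.

```python
# Toy illustration only: invented segments and numbers, nothing resembling
# Facebook's actual ad systems. The idea is simply "score each trait
# combination by clickthrough rate and keep the best one."
from collections import namedtuple

Segment = namedtuple("Segment", "age_range region interest impressions clicks")

# Hypothetical campaign results for a few candidate audiences.
candidates = [
    Segment((18, 24), "US-West", "indie rock",   impressions=50_000, clicks=900),
    Segment((25, 34), "US-East", "horror films", impressions=42_000, clicks=1_300),
    Segment((35, 44), "UK",      "cooking",      impressions=38_000, clicks=400),
]

def ctr(seg):
    """Clickthrough rate: clicks divided by impressions."""
    return seg.clicks / seg.impressions if seg.impressions else 0.0

# The "demographic winner" is just the combination with the highest CTR.
best = max(candidates, key=ctr)
print(f"Winner: ages {best.age_range}, {best.region}, {best.interest} (CTR {ctr(best):.2%})")
```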
The hard reality is that Facebook will never try to limit such use of their data unless the public uproar reaches such a crescendo as to be un-mutable. Which is what happened with Trump and the “fake news” accusation: even the implacable Zuck had to give in and introduce some anti-fake news technology. But they’ll slip that trap as soon as they can. And why shouldn’t they? At least in the case of ads, the data and the clickthrough rates are on their side.
There's also a link to another Guardian post that discusses how Facebook shares teens' emotional states with advertisers (likely inferred by some kind of algorithm-based sentiment model). If we've learned anything at all about algorithms, it's that they can misinform as often as they inform. A user uproar could certainly change the fate of data sharing with advertisers, but I don't see this happening until something truly offensive occurs, probably akin to Target's mishap a few years ago, when its pregnancy-prediction model outed a teenage customer's pregnancy to her father via targeted coupons. And even that won't stop the use of data to inform advertising campaigns and the marketing of products/services on these platforms. The temptation (and intrinsic need) to use data is too fierce. And the rate of engagement on these platforms, with the amount of information being provided on a daily basis, is unprecedented in human history.
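For a sense of how crude that kind of inference can be, here's a deliberately naive, made-up sentiment scorer (not anything Facebook is known to use): it just counts keywords, so negation and sarcasm sail right past it.

```python
# Deliberately naive, invented example of keyword-based sentiment scoring.
# It counts positive and negative words and nothing else, which is exactly
# how such a model ends up misinforming as often as it informs.
NEGATIVE = {"stressed", "anxious", "worthless", "overwhelmed", "useless"}
POSITIVE = {"great", "happy", "excited", "confident"}

def naive_sentiment(post):
    """Positive-keyword count minus negative-keyword count; ignores negation."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(naive_sentiment("Feeling worthless and overwhelmed tonight"))  # -2, flagged as vulnerable
print(naive_sentiment("Not stressed at all, exams went great!"))     # 0, the negation is missed
```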
While platforms like Facebook continue to require our attention to survive, they increasingly also need us to provide data to feed their monetary engines. The two are almost inextricably tied together. Time and tolerance will tell how this shakes out.