Facebook's Data Dilemma

Writing in the Guardian this past Tuesday, Antonio Garcia-Martinez, a former product manager at Facebook, explains how he "was charged with turning Facebook data into money, by any legal means":

Converting Facebook data into money is harder than it sounds, mostly because the vast bulk of your user data is worthless. Turns out your blotto-drunk party pics and flirty co-worker messages have no commercial value whatsoever.

But occasionally, if used very cleverly, with lots of machine-learning iteration and systematic trial-and-error, the canny marketer can find just the right admixture of age, geography, time of day, and music or film tastes that demarcate a demographic winner of an audience. The “clickthrough rate”, to use the advertiser’s parlance, doesn’t lie.

Yadda yadda, we've heard this all before. It's how most ad platforms operate these days -- harnessing machine learning and all sorts of other [likely] cobbled-together algorithms as conduits that pipe proprietary data to advertisers and agencies, who use it in various campaigns to micro-target audiences and potential customers.

This is probably where privacy advocates should come in, shouting that this is a misuse of personal data. But is it? Facebook has provided its users a free service monetized by users' own eagerness to share and provide Facebook (and, subsequently, its advertisers) everything about themselves. While you could argue that some of the data provided is "personally identifiable information" (PII), Facebook hasn't forced you to share that information. And since users provide that information willingly, Facebook can more or less do what it wants with it. Garcia-Martinez tends to agree, arguing that processing profile traits and post contents to inform demographic and audience triggers can easily be done with programming, so why should its application matter to the masses?

The hard reality is that Facebook will never try to limit such use of their data unless the public uproar reaches such a crescendo as to be un-mutable. Which is what happened with Trump and the “fake news” accusation: even the implacable Zuck had to give in and introduce some anti-fake news technology. But they’ll slip that trap as soon as they can. And why shouldn’t they? At least in the case of ads, the data and the clickthrough rates are on their side.
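To make the mechanics concrete, here's a minimal, hypothetical sketch of the kind of rule-based segmentation Garcia-Martinez describes. None of this is Facebook's actual code; the profile fields, thresholds, and the targeting rule are stand-ins for whatever signals an ad platform really uses.

```python
# A hypothetical sketch of audience segmentation: filter profiles on a
# few demographic traits and keep whichever "admixture" yields the best
# clickthrough rate. Field names and thresholds are illustrative only.

users = [
    {"id": 1, "age": 24, "city": "Austin", "likes": {"indie rock", "films"}},
    {"id": 2, "age": 47, "city": "Omaha",  "likes": {"golf"}},
    {"id": 3, "age": 29, "city": "Austin", "likes": {"indie rock"}},
]

def in_audience(user, min_age, max_age, city, interest):
    """Return True if a profile matches one candidate targeting rule."""
    return (min_age <= user["age"] <= max_age
            and user["city"] == city
            and interest in user["likes"])

# One candidate "admixture" of age, geography, and taste.
audience = [u["id"] for u in users
            if in_audience(u, 18, 34, "Austin", "indie rock")]
print(audience)  # -> [1, 3]

# In practice a platform generates thousands of such rules (or learns
# them with a model) and keeps the ones with the highest CTR.
```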

There's also a link to another Guardian post that discusses how Facebook shares teens' emotional states with advertisers (likely derived by some kind of algorithm-based sentiment model). If we've learned anything at all about algorithms, it's that they can misinform as often as they can inform. A user uproar could certainly change the fate of data sharing with advertisers, but I don't see this happening until something truly offensive occurs, probably akin to Target's mishap a few years ago. And even that won't stop the use of data to inform advertising campaigns and the marketing of products/services on these platforms. The temptation (and intrinsic need) to use data is too fierce. And the rate of engagement on these platforms, given the amount of information being provided daily, is unprecedented in human history.
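About those algorithm-derived emotional states: a crude sentiment model misfires embarrassingly easily. Here's a toy scorer, purely illustrative (Facebook has never published its model), that counts lexicon words and falls apart the moment negation shows up:

```python
# A toy lexicon-based sentiment scorer, illustrating how crude models
# can "misinform as often as they inform." The word lists are made up.

POSITIVE = {"happy", "great", "excited"}
NEGATIVE = {"sad", "anxious", "worthless"}

def naive_sentiment(post: str) -> str:
    words = post.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(naive_sentiment("feeling great today"))    # positive (plausible)
print(naive_sentiment("i am not happy at all"))  # positive (wrong: no negation handling)
```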

While platforms like Facebook continue to require our attention to survive, they increasingly also need us to provide data to feed their monetary engines. The two are almost inextricably tied together. Time and tolerance will tell how this shakes out.

The Trials of Deleting Uber

Uber's public image has had a hell of a first quarter. I can't recall the last tech company in recent history that ran into shitstorm after shitstorm as reliably and as damningly as they have. In today's New York Times, there's a profile on Uber CEO Travis Kalanick by Mike Isaac that details some of these tribulations, among them a confrontation with Apple's CEO, Tim Cook. Notably, Uber had attempted to hide from Apple its nefarious practices around user location-tracking and device identification (called "fingerprinting"). This practice would allow Uber to identify an individual iPhone even after the app was deleted and/or the phone reset. If it sounds egregious, it is. As The Verge points out, this is more of the same deceptive bullshit Uber has pulled off in recent years, including “evad[ing] government regulators and track[ing] rival drivers, track[ing] customers without permission, and being sued for allegedly stealing proprietary information regarding self-driving cars from Alphabet’s Waymo.”

Can most of this be blamed on the CEO? According to that profile, probably:

But the previously unreported encounter with Mr. Cook showed how Mr. Kalanick was also responsible for risk-taking that pushed Uber beyond the pale, sometimes to the very brink of implosion.

Crossing that line was not a one-off for Mr. Kalanick. According to interviews with more than 50 current and former Uber employees, investors and others with whom the executive had personal relationships, Mr. Kalanick, 40, is driven to the point that he must win at whatever he puts his mind to and at whatever cost — a trait that has now plunged Uber into its most sustained set of crises since its founding in 2009.

As long as deleting an app still leaves open the possibility of being tracked by the company behind it, practices like this remain a threat to privacy and security, and I hope gatekeeping companies like Apple continue to fight the good fight.

Update (April 24, 2017)

Additional speculation (and clarification) on the fallout from the New York Times profile, courtesy of John Gruber (Apple pundit extraordinaire):

That sounds like Uber was doing the identifying and “tagging” (whatever that is) after the app had been deleted and/or the device wiped, but I think what it might — might — actually mean is merely that the identification persisted after the app had been deleted and/or the device wiped. That’s not supposed to be technically possible — iOS APIs for things like the UDID and even the MAC address stopped reporting unique identifiers years ago, because they were being abused by privacy invasive ad trackers, analytics packages, and entitled shitbags like Uber. That’s wrong, and Apple was right to put an end to it, but it’s far less sensational than the prospect of Uber having been able to identify and “tag” an iPhone after the Uber app had been deleted. The latter scenario only seems technically possible if other third-party apps were executing surreptitious code that did this stuff through Uber’s SDK, or if the Uber app left behind malware outside the app’s sandbox. I don’t think that’s the case, if only because I don’t think Apple would have hesitated to remove Uber from the App Store if it was infecting iPhones with hidden phone-home malware.

John's whole piece is worth reading if you want more clarity on what Uber was presumably doing. I'm curious what their tactics were/are on other phone platforms.
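For the unfamiliar, here's the general shape of device fingerprinting, sketched in Python. To be clear, Uber's actual technique has never been published; the attributes below are hypothetical stand-ins for whatever signals an app can still read.

```python
# A generic illustration of device fingerprinting: hash a handful of
# relatively stable device attributes into one identifier. This is NOT
# Uber's actual technique; the attributes are illustrative placeholders.

import hashlib

def fingerprint(attrs: dict) -> str:
    """Derive a stable ID by hashing sorted attribute key/value pairs."""
    blob = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

device = {
    "model": "iPhone6,1",
    "os": "iOS 8.1",
    "carrier": "AT&T",
    "timezone": "America/Los_Angeles",
}
print(fingerprint(device))

# Deleting an app doesn't change these attributes, so the same device
# reproduces the same fingerprint on reinstall -- which is why Apple
# locked down APIs like the UDID and MAC address in the first place.
```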


"Nobody's Got to Use the Internet"

We heard some fighting words from US Rep. Jim Sensenbrenner (R-Wis.) this week, a stocky old man defending his part in eliminating privacy rules for Internet Service Providers (ISPs), a move that affects every American. I quote: "Nobody's got to use the Internet."

He went on to say that if you regulated the Internet like a utility, "we wouldn't have the Internet". This nonsensical retort to his constituents betrays an incredible disconnect between our elected officials and the reality of the people they represent. This is typical Republican rhetoric applied to what should be a nonpartisan issue. The Internet is woven into the fabric of our society, and tossing out blanket statements implying it's optional for anyone in this country is unfathomably stupid. Perhaps for an old man, using the Internet is not nearly as intrinsic to day-to-day living as it is for the rest of us, but it is concerning that such a man is shaping the rules that govern our privacy and the public utility that is the Internet.

The ruling is disappointing, and comes at a crucial time in our democracy, when the intersection of connected devices, surveillance, and our right to privacy and dignity has become an increasingly important fork in political decision-making. It will continue to be an area requiring, justifiably, government regulation. No one is saying choice is a bad thing here, but applying that rationale to ISPs' clamoring for advertising "innovation" is ridiculous. ISPs are feeling pressure from advertising giants like Facebook and Google, and are begging (sorry, lobbying) to gain a foothold that justifies their existence as something more meaningful than an expensive pipe to the Internet. We can also see how well this strategy is working for Verizon and AT&T, telecommunications behemoths that have sunk into a similar dilemma and are investing heavily in content while lobbying hard against net neutrality to justify business expansion to their shareholders.

The bullshit doesn't end here.

US Rep. Jim Sensenbrenner (R-Wis.)

The NSA & CIA Fail the American People

Remember the Apple iPhone / San Bernardino case from early 2016? Here’s a recap:

The F.B.I. has been unable to get into the phone used by Syed Rizwan Farook, who was killed by the police along with his wife after they attacked Mr. Farook’s co-workers at a holiday gathering. Reynaldo Tariche, an F.B.I. agent on Long Island, said, “The worst-case scenario has come true.”

But Apple couldn’t simply unlock the iPhone, thanks to the passcode implementation Farook used, so a legal dispute ensued in which the FBI demanded Apple build a backdoor to the “single” device.

Behind the scenes, relations were tense, as lawyers for the Obama administration and Apple held closely guarded discussions for over two months about one particularly urgent case: The F.B.I. wanted Apple to help “unlock” an iPhone used by one of the two attackers who killed 14 people in San Bernardino, Calif., in December, but Apple was resisting.

When the talks collapsed, a federal magistrate judge, at the Justice Department’s request, ordered Apple to bypass security functions on the phone. The order set off a furious public battle on Wednesday between the Obama administration and one of the world’s most valuable companies in a dispute with far-reaching legal implications.

There were two opposing sides to this case.

  1. Apple’s case: To some, this was the pro-privacy side of the case. Why not create a quick backdoor to the phone for the US government, and then close it up? In Apple’s own words: “Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.” You create one backdoor for the US Government, then what? You’ve created a backdoor for all iPhones running the same iOS version, and it could be used over and over again. It also sets what should be obvious: a dangerous precedent for the security of iPhone users and the power of the US Government. As the Washington Post makes explicitly clear,1 “This is an existing vulnerability in iPhone security that could be exploited by anyone.”
  2. The US Government’s case:2 Create a “key”, essentially a backdoor into the terrorist’s iPhone, to unlock whatever data is in there (if there’s anything to find at all), and, per #1’s concerns, endanger one of the most-used mobile devices on the planet. If the data helps the case, great. If, that is. (For why the FBI couldn’t just brute-force the passcode itself, see the back-of-the-envelope sketch after this list.)
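The obvious question is why the FBI couldn’t simply guess passcodes. Apple’s iOS Security Guide states that each guess costs roughly 80 ms of on-device key derivation, and stock iOS piles escalating delays (and an optional wipe after 10 failures) on top of that. The arithmetic below assumes that 80 ms figure:

```python
# Back-of-the-envelope math on why the FBI needed Apple's help.
# Assumption: ~80 ms per guess, per Apple's iOS Security Guide, with
# guesses having to run on the phone itself.

ATTEMPT_SECONDS = 0.08  # ~80 ms of key derivation per guess

for digits in (4, 6):
    combos = 10 ** digits
    hours = combos * ATTEMPT_SECONDS / 3600
    print(f"{digits}-digit passcode: {combos:,} combos, ~{hours:.1f} hours worst case")

# 4-digit: ~0.2 hours; 6-digit: ~22 hours -- trivial, IF you can submit
# guesses electronically. But stock iOS adds escalating delays after
# failed attempts and can wipe the device after 10 misses, which is
# exactly what the FBI asked Apple to disable in a custom build.
```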

Okay, so what happened again? The FBI lost the chance to decrypt the phone via Apple, but apparently “may have found way to unlock San Bernardino shooter's iPhone” anyway. Specifically, this single iPhone and not all the others. What technical means were used isn’t clear, but the maneuver spared all iPhones a massive security risk.

If the FBI had gotten its way, though, the most recent news about both the NSA and CIA would have hit even harder. And that’s saying something, because a few massive pieces of news have crept out recently that are entirely related to the FBI’s request from last year.

As we’ve been finding out, when US Government agencies aim to have tools to monitor terrorists or their own citizens, they rely heavily on finding (or buying) vulnerabilities in software and devices, or creating exploits (essentially malware) for physical exploitation of such devices. This unraveling began in March of this year, when WikiLeaks began posting freshly acquired, redacted documents, followed in April by the Shadow Brokers’ dump of NSA tools. Without getting into the weeds (you can read up on it if you so desire), the NSA leaks have been confirmed as legitimate, and they keep spooling out fresh concerns to security experts and software developers the world over.

The latest concern to come out of this is a series of newly surfaced exploits deployed by the NSA to attack computers running pre-Windows 10 operating systems (roughly 65%+ of all desktops on the planet). One tool in particular, called FUZZBUNCH, automates the deployment of NSA malware and would allow a member of the agency to easily (from their desk) infect a target computer. As reported by the Intercept:

According to security researcher and hacker Matthew Hickey, co-founder of Hacker House, the significance of what’s now publicly available, including “zero day” attacks on previously undisclosed vulnerabilities, cannot be overstated: “I don’t think I have ever seen so much exploits and 0day exploits released at one time in my entire life,” he told The Intercept via Twitter DM, “and I have been involved in computer hacking and security for 20 years.” Affected computers will remain vulnerable until Microsoft releases patches for the zero-day vulnerabilities and, more crucially, until their owners then apply those patches.

“This is as big as it gets,” Hickey said. “Nation-state attack tools are now in the hands of anyone who cares to download them…it’s literally a cyberweapon for hacking into computers…people will be using these attacks for years to come.”

Yes, the cybertools used by our government’s agencies have been compromised, and are now available to anyone. While we’re sure Microsoft is working on patches, this is what happens when governments hold exploits and backdoors into software that can, in turn, endanger people’s most valuable information. While this is still about digital privacy, it’s also about security. What will it take for citizens to take notice of the monumental weight of these leaks, these compromises? An attack on their credit cards? Their mortgages? Their identities?

This Doesn’t Seem Fine

A great piece by Vice’s Motherboard expands on this topic, essentially warning that it’s foolish and naive to assume any government official or contractor can keep cybertools safe. Here’s another way of thinking about this: consider the master key TSA agents have, granting them the ability to unlock any piece of luggage (with a TSA-approved lock). Well, as you may know, that key was compromised, and you can now download CAD files to get your own version 3D-printed. Imagine that: anyone can get into anyone else’s luggage. But who would take the time to print one of these keys? Probably someone with malicious intent. And if you apply this same concept to master keys for software, apps, banking systems, etc., would you still trust the US Government (or any other government) to keep that key safe? To not misuse it?

Security and privacy in a digital context are becoming more intrinsically attached, as nearly every compromise of the former affects the latter. As my friend Eric mentioned in a recent email exchange, we may be seeing privacy become a third-rail issue in Washington. As unfathomable as it may seem, privacy no longer appears to be a nonpartisan issue. We’ve already seen the recent reversal of ISP data privacy restrictions, and even as Comcast tries to reassure us that it won’t sell our “individual” data, it will likely sell pools of data so advertisers can build look-alike models (sketched below) and reach individuals anyway, or target individuals through its own ad network based on browsing history. Republicans seem more prone to manipulation by telecommunications lobbyists. Or maybe they just don’t give a shit about the digital privacy and security of the American people.
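For the curious, here’s roughly what look-alike modeling does under the hood: fit a classifier that separates a “seed” audience from the general pool, then score the pool and target the highest scorers. A minimal sketch with made-up features (real models use far richer behavioral signals):

```python
# A rough sketch of look-alike modeling: train a classifier on a seed
# audience vs. the general pool, then target the top-scoring lookalikes.
# The two synthetic features stand in for real behavioral signals.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Features: [hours online per day, news sites visited per week]
seed = rng.normal(loc=[5.0, 20.0], scale=1.0, size=(200, 2))   # known customers
pool = rng.normal(loc=[3.0, 8.0],  scale=2.0, size=(2000, 2))  # everyone else

X = np.vstack([seed, pool])
y = np.array([1] * len(seed) + [0] * len(pool))

model = LogisticRegression().fit(X, y)

# Score the general pool; the top slice becomes the "look-alike" audience.
scores = model.predict_proba(pool)[:, 1]
lookalikes = np.argsort(scores)[-100:]  # indices of the 100 best matches
print(f"Top look-alike score: {scores[lookalikes[-1]]:.2f}")
```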

Let’s hope the recent leaks of cyber-tool information make enough headlines to reach the (mostly) non-news-reading American populace, and that people take the time to understand the consequences of putting too much trust and power in the hands of our governments.

Update

Microsoft has reported that "most of the exploits that were disclosed fall into vulnerabilities that are already patched in our supported products", and "of the three remaining exploits [...] none reproduces on supported platforms, which means that customers running Windows 7 and more recent versions of Windows or Exchange 2010 and newer versions of Exchange are not at risk".

As always, keep your software and operating system updated to the latest version.
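If you’re on Windows and want to verify for yourself, you can check the installed-hotfix inventory. A small sketch; note that the relevant KB number varies by Windows version (KB4013389, used here, is one of the MS17-010 packages, but confirm against Microsoft’s bulletin for your build):

```python
# Check whether a given Windows security update is installed, using the
# built-in "wmic qfe" inventory (Windows only). KB4013389 is used as an
# illustrative example of an MS17-010 package; the right KB depends on
# your Windows version.

import subprocess

def hotfix_installed(kb_id: str) -> bool:
    """Return True if the given KB appears in the installed-hotfix list."""
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return kb_id.upper() in out.upper()

if __name__ == "__main__":
    kb = "KB4013389"  # illustrative; varies by Windows version
    print(f"{kb} installed: {hotfix_installed(kb)}")
```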

  1. This article is a good read, as it complements Apple’s letter and explains the intricacies of what is really being requested. ↩︎
  2. No, I didn’t complete the reading of this article, but we’ll assume it covers “both sides of the story”, amiright. ↩︎