Google has recently been entangled in a user-privacy lawsuit in which the tech giant may have to pay $150–$200 million to settle allegations. According to the details of the lawsuit, Google’s YouTube service violated children’s privacy law by gathering data on users under the age of 13 without obtaining parental permission. Clearly, Google needs to raise its data-privacy standards, and the company is working to improve its policies.
The tech giant announced a new initiative to develop a set of open standards to fundamentally enhance privacy on the web, calling it a Privacy Sandbox: a secure environment for personalisation that also protects user privacy. There are positive upgrades as part of the Privacy Sandbox, such as the Trust API, a proposal based on Privacy Pass that will use trust tokens to prove that a given user is human, sparing users from filling in CAPTCHAs. While the Privacy Sandbox seems like a step in the right direction, experts have come forward to say the tech giant has not done enough.
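The trust-token idea can be illustrated with a deliberately simplified sketch: solve one CAPTCHA, receive a batch of single-use signed tokens, and redeem them later instead of solving more CAPTCHAs. This is only a toy model; the real Privacy Pass protocol uses blinded tokens (a VOPRF) so that the issuer cannot link issuance to redemption, which plain HMAC signing below does not provide.

```python
import hashlib
import hmac
import secrets

# Hypothetical issuer key; in Privacy Pass the issuer cannot link
# issued tokens to redemptions thanks to blinding (omitted here).
ISSUER_KEY = secrets.token_bytes(32)

def issue_tokens(n):
    """After one CAPTCHA solve, the issuer signs a batch of random tokens."""
    tokens = [secrets.token_bytes(16) for _ in range(n)]
    return [(t, hmac.new(ISSUER_KEY, t, hashlib.sha256).digest()) for t in tokens]

spent = set()  # redeemed tokens, to enforce single use

def redeem(token, sig):
    """A site verifies a token instead of showing a CAPTCHA; each token is single-use."""
    if token in spent:
        return False
    expected = hmac.new(ISSUER_KEY, token, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, sig):
        return False
    spent.add(token)
    return True

batch = issue_tokens(5)
t, s = batch[0]
assert redeem(t, s)        # first redemption succeeds
assert not redeem(t, s)    # replay of the same token is rejected
```

The single-use property matters: if tokens were reusable, a token itself would become a cross-site tracking identifier, defeating the purpose.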
Researchers Criticise Google Argument For Not Blocking Tracking Cookies
Google, for its part, has stressed its commitment to targeted advertising via cookies. The company argues that blocking third-party cookies, the most common tracking technology on the internet and one Google uses extensively to track users, actually damages user privacy. The rationale is that by removing the tools that make tracking easy, other browser companies will force trackers to adopt “opaque techniques” like fingerprinting. Google stated that “large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting.”
“Google’s claim to the contrary is privacy gaslighting, because it’s an attempt to persuade users and policymakers that an obvious privacy protection—already adopted by Google’s competitors—isn’t a privacy protection at all. There is little trustworthy evidence on the comparative value of tracking-based advertising. Google has not devised an innovative way to balance privacy and advertising; it is latching onto prior approaches that it previously disclaimed as impractical,” the researchers stated.
Google’s Proposal For Conversion Measurement
Recently, Apple cracked down on advertisers’ use of cookies by deploying Intelligent Tracking Prevention (ITP) and proposed a privacy-preserving solution for ad attribution in which ad metadata identifies a campaign rather than an individual user. Google has proposed a similar solution, including a destination URL, a reporting URL, and a field for extra “impression data” that together identify a campaign. The metadata is stored in a global ad table, and whenever the user reaches a specific URL through an ad, the browser makes a request to the reporting URL, reporting that a given ad led to a conversion.
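The flow described above can be sketched in a few lines. Everything here is illustrative, not the actual Chrome API: the names `AdImpression`, `record_click`, and `on_navigation` are hypothetical stand-ins for the browser-internal bookkeeping the proposal describes.

```python
from dataclasses import dataclass

@dataclass
class AdImpression:
    destination_url: str   # where the ad click leads
    reporting_url: str     # advertiser endpoint the browser will notify
    impression_data: int   # extra metadata attached by the advertiser

ad_table = []  # the browser's global ad table

def record_click(imp: AdImpression):
    """Store ad metadata when the user clicks an ad."""
    ad_table.append(imp)

def on_navigation(url: str):
    """When the user later lands on an ad's destination URL, the browser
    schedules a conversion report to the advertiser's reporting URL."""
    return [(imp.reporting_url, imp.impression_data)
            for imp in ad_table if url == imp.destination_url]

record_click(AdImpression("https://shop.example/deal",
                          "https://ads.example/report", 42))
print(on_navigation("https://shop.example/deal"))
# [('https://ads.example/report', 42)]
```

Note that the user never talks to the advertiser directly: the browser mediates the report, which is what makes the scheme privacy-preserving in principle.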
This is not a bad move in theory, because it measures a campaign’s success rate without recording individually identifying data through cookies. But according to the Electronic Frontier Foundation, there are concerns about the size of the impression data that Google proposes to collect: 64 bits, a number between 1 and roughly 18 quintillion. This is enormous compared with Apple’s proposal of just 6 bits of information for campaign ID metrics. A 64-bit field, as proposed by Google, allows advertisers to assign a unique ID to each and every impression, which can then be used to link different conversions to specific user profiles. This is especially true when a user clicks on different ads from a single advertiser, as the IDs can help build a browsing profile of that user.
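The arithmetic behind the EFF’s concern is simple to check. With 64 bits every impression can carry its own globally unique ID, whereas 6 bits forces many users into each of only 64 buckets:

```python
import secrets

# ID-space sizes from the two proposals discussed above.
google_bits, apple_bits = 64, 6
print(2 ** google_bits)  # 18446744073709551616 distinct values, ~18 quintillion
print(2 ** apple_bits)   # 64 distinct values

# With 64 bits, an advertiser can give every single impression its own ID,
# so a conversion report pinpoints the exact click (and thus the user).
impression_id = secrets.randbits(64)   # effectively unique per impression

# With 6 bits, a report can only say which of at most 64 campaigns
# converted; many users share each bucket, providing anonymity.
campaign_id = impression_id % 64       # collapses to one of 64 buckets
```

The difference is not incremental: 64 bits is enough to label every ad impression ever served, while 6 bits is deliberately too small to identify anyone.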
Google Proposes Federated Learning of Cohorts For Browser Data Analysis
Another technology that some are concerned about is Google’s proposal for Federated Learning of Cohorts (or “FLoC”), based on Google’s federated learning technology. The technique builds machine learning models locally on users’ devices, using small pieces of information at a time, thereby preserving the privacy of user details. Federated learning systems can be configured to use secure multi-party computation and differential privacy in order to keep raw user data verifiably private. According to Google, federated learning will prevent revealing that a particular user is a member of a group with similar product preferences until there are thousands of users in the group. While this is an upgrade over current practice, experts have identified problems with it and are concerned about its use.
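The core federated-learning idea, that only model updates leave the device while raw data stays local, can be shown with a toy federated-averaging loop. This is a minimal sketch under strong simplifying assumptions (a 1-D linear model, full participation, no secure aggregation or differential privacy), not Google’s production system:

```python
def local_update(weights, data, lr=0.1):
    """One user trains a toy model y = w*x on their own device.
    Only the resulting weight, never the raw data, is shared."""
    w = weights
    for x, y in data:
        grad = 2 * (w * x - y) * x   # gradient of squared loss
        w -= lr * grad
    return w

def federated_average(global_w, user_datasets):
    """The server averages the users' updated weights; it never
    sees any individual (x, y) pair."""
    updates = [local_update(global_w, d) for d in user_datasets]
    return sum(updates) / len(updates)

w = 0.0
# Three users whose private data are all consistent with y = 2x.
users = [[(1.0, 2.0)], [(2.0, 4.0)], [(1.5, 3.0)]]
for _ in range(50):
    w = federated_average(w, users)
print(round(w, 2))  # → 2.0
```

The model converges to the shared trend (w ≈ 2) even though no user’s data ever left their device, which is the property FLoC builds on.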
Experts on web privacy say that the problem with FLoC isn’t the process, it’s the product. FLoC would use Chrome users’ browsing history for clustering: the technology analyses browsing patterns, creates groups of similar users, and assigns individual users to specific groups, known as flocks. In the end, each browser carries a “flock name” identifying it as a particular kind of internet user. Even though the precise identity of a user is hidden, a flock name, exposed as an HTTP header, can serve as a digital signature detailing a user’s patterns. This is a privacy concern, as advertisers can exploit flock names for their own user profiling.
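One way such a cohort ID could be derived is a locality-sensitive hash of the domains a user visits, so that browsers with similar histories land in the same flock. The sketch below uses a simplified SimHash for illustration; it is not Chrome’s exact algorithm, and the 8-bit cohort size is an arbitrary choice:

```python
import hashlib

def simhash_cohort(domains, bits=8):
    """Illustrative SimHash over visited domains: similar browsing
    histories tend to agree on many bits of the resulting cohort ID."""
    counts = [0] * bits
    for d in domains:
        h = int.from_bytes(hashlib.sha256(d.encode()).digest()[:4], "big")
        for i in range(bits):
            counts[i] += 1 if (h >> i) & 1 else -1
    # Majority vote per bit position yields the cohort ("flock") ID.
    return sum(1 << i for i in range(bits) if counts[i] > 0)

a = simhash_cohort(["news.example", "shop.example", "video.example"])
b = simhash_cohort(["news.example", "shop.example", "blog.example"])
# a and b are each an integer in [0, 255]; this cohort ID is what would
# be exposed to every site the user visits.
```

The privacy criticism follows directly: because the same ID is sent to every site, it functions as a coarse behavioural label that any advertiser can read and combine with other signals.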
As users and lawmakers continue to demand better privacy protections from Big Tech, Google’s Privacy Sandbox has been criticised by some as a half-baked effort. This is all the more apparent when its efforts are compared with those of Apple and Mozilla, both of which have shipped stronger privacy-preserving features in Safari’s Intelligent Tracking Prevention and Firefox’s Enhanced Tracking Protection. Google’s argument that restricting third-party cookies will end up worsening user privacy has likewise been dismissed as a weak one.