EU to Investigate Google and Facebook’s Data Collection Practices

The European Union has begun preliminary investigations into the data collection practices of Google and Facebook. The investigations aim to evaluate whether the two US tech firms are complying with EU rules in the region.

A spokesperson for the European Commission, the EU’s executive arm, told CNBC via email on Monday: “The Commission has sent out questionnaires as part of a preliminary investigation into Google’s and Facebook’s data practices. These investigations concern the way data is gathered, processed, used and monetized, including for advertising purposes.”

According to the spokesperson, the preliminary investigations are ongoing. The EU has previously investigated Google, resulting in more than €8bn (£6.8bn) in fines. A 2017 investigation into Google Shopping led to a €2.4bn fine. In 2018, Google was fined €4.3bn over anticompetitive practices involving its Android smartphone operating system, and in 2019 it was fined €1.5bn for advertising violations. These new investigations show that the EU isn’t done probing Google.

A spokesperson for Google told CNBC, “We use data to make our services more useful and to show relevant advertising, and we give people controls to manage, delete or transfer their data. We will continue to engage with the Commission and others on this important discussion for our industry.”

A Facebook spokesperson told CNBC via email on Tuesday, “Data helps us tailor our apps and services so each person’s experience is unique and personalized.” The spokesperson added that Facebook is fully cooperating with the EU and is happy to answer any questions regulators might have.

The EU has previously investigated Amazon to determine whether the e-retailer was complying with European rules on handling data from independent retailers.

Margrethe Vestager, who is the EU’s competition chief, has led a wider crackdown on how tech giants operate across the 28 EU member states. She has urged Ireland to collect 13 billion euros ($14.34 billion) in unpaid taxes from Apple, fined Google in a number of cases and accused Facebook of misleading EU regulators over its takeover of WhatsApp. 

Millions of Twitter and Facebook Users May Have Had Their Accounts Compromised

Facebook and Twitter announced on Monday that the personal data of millions of users may have been improperly accessed after they used their social media accounts to log in to several Android apps downloaded from the Google Play Store.

Security researchers discovered that a mobile software development kit (SDK) named oneAudience gave third-party developers access to people’s personal data, including the email addresses, usernames, and most recent tweets of people who used their Twitter accounts to log in to apps such as Giant Square and Photofy.

In a blog post, Twitter informed users of the issue and said that this activity could have made it possible for a hacker to take control of someone’s Twitter account, although there is no evidence that this occurred.

A Twitter spokeswoman, Lindsay McCallum, said:

“We think it’s important for people to be aware that this exists out there and that they review the apps that they use to connect to their accounts.”

Twitter also announced that it will inform users who were affected, and it has notified Google and Apple about the vulnerability so that further action can be taken.

A Facebook spokesperson sent the following statement after the recent disclosure: 

“Security researchers recently notified us about two bad actors, oneAudience and Mobiburn, who were paying developers to use malicious software developer kits (SDKs) in a number of apps available in popular app stores. After investigating, we removed the apps from our platform for violating our platform policies and issued cease and desist letters against One Audience and Mobiburn. We plan to notify people whose information we believe was likely shared after they had granted these apps permission to access their profile information like name, email, and gender. We encourage people to be cautious when choosing which third-party apps are granted access to their social media accounts.”

This comes at a time when Facebook, Google, and Twitter are all facing heightened scrutiny from regulators over personal data and its use by outside developers to track and target customers. The issue has been of particular concern since March 2018, when the Cambridge Analytica scandal broke: Cambridge Analytica accessed up to 87 million Facebook profiles in order to target ads for Donald Trump in the 2016 presidential election.

A Facebook spokesperson told The Verge that the company encourages people “to be cautious when choosing which third-party apps are granted access to their social media accounts.” 

Facebook’s New AI Can Help You Circumvent Facial Recognition

Facial recognition technology is primarily used to detect and identify people, but in a turn of events, Facebook has created a tool that fools facial recognition systems into wrongly identifying someone. Facebook’s Artificial Intelligence Research team (FAIR) has developed an AI system that can “de-identify” people in pre-recorded videos as well as in live video, in real time.

This system pairs an adversarial auto-encoder with a trained facial classifier. An auto-encoder is an artificial neural network that learns a compact representation of a data set without supervision, while a classifier maps input data to categories; here, the facial classifier works on data from facial images and videos. The system slightly distorts a person’s face in a way that confuses facial recognition systems while maintaining a natural look that humans still recognize. The AI doesn’t have to be retrained for different people or videos, and it introduces only slight temporal distortion.
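The paper’s full architecture is far beyond a short example, but the core adversarial idea, nudging the input just enough that a classifier’s identification score collapses while the input itself barely changes, can be sketched with a toy linear “classifier” in numpy. Everything below (the 8-dimensional features, the `identify` scorer, the `strength` parameter) is a hypothetical illustration of the principle, not Facebook’s actual system:

```python
import numpy as np

# Toy stand-in for a face-recognition model: a fixed linear scorer over
# an 8-dimensional feature vector. A score far from 0 means a confident
# identification. All names, shapes, and numbers here are illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=8)       # "classifier" weights (hypothetical)
face = rng.normal(size=8)    # features of the original face

def identify(x):
    """Identification score the toy classifier assigns to features x."""
    return float(w @ x)

def de_identify(x, strength=0.5):
    """Shift the features against the classifier's weight direction so
    the identification score shrinks by `strength`, while the features
    themselves (the 'face') change only slightly."""
    score = identify(x)
    return x - strength * score * w / (w @ w)

adv = de_identify(face)
print(f"original score:      {identify(face):+.3f}")
print(f"de-identified score: {identify(adv):+.3f}")        # pulled toward 0
print(f"feature change:      {np.linalg.norm(adv - face):.3f}")  # small
```

The real system learns this perturbation with an auto-encoder and a deep face classifier rather than a closed-form linear step, but the trade-off is the same: a machine-readable identity signal is suppressed while the human-visible content is nearly untouched.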

According to Facebook, “Recent world events concerning advances in, and abuse of face recognition technology invoke the need to understand methods that deal with de-identification. Our contribution is the only one suitable for video, including live video, and presents a quality that far surpasses the literature methods.” 

In recent years, deepfake videos, in which a person’s face is edited into videos of other people, have become common. These videos have become so convincing and advanced that it can be difficult to tell real footage from fake.

This de-identification program is built to protect people from such deepfake videos. 

In the paper, researchers Oran Gafni, Lior Wolf, and Yaniv Taigman discussed the ethical concerns surrounding facial recognition technology. Due to privacy threats and the misuse of facial data to create misleading videos, the researchers decided to focus on video de-identification.

In principle, it works much like face-swap apps: the system overlays a slightly warped, computer-generated version of the person’s face, built from past images of them, onto their real one. As a result, they look like themselves to a human viewer, but a computer can no longer pick up the vital bits of identifying information it could extract from a normal video or photo.

According to the team, the software beat state-of-the-art facial recognition systems and was the first to do so on video. It also preserved the person’s natural expressions and worked across a diverse range of ethnicities, ages, and genders.

Even though the software is compelling, don’t expect it to reach Facebook anytime soon: the company told VentureBeat that it has no intention of implementing the research in its products. That said, the practical applications are clear. The software could be used to automatically thwart third parties who use facial recognition technology to track people’s activity or create deepfakes.

This research comes at a time when Facebook is battling a $35 billion class-action lawsuit over alleged misuse of facial recognition data in Illinois. Facebook isn’t the only company working on de-identification technology: D-ID recently released a Smart Anonymization platform that allows clients to remove personally identifying information from photos and videos.