Top News

How much is your sensitive info worth to Facebook? About $20

Facebook has been targeting teenagers and young adults with their VPN app “Research,” aimed at 13- to 35-year-olds. It’s part of their overall “Project Atlas,” a far-reaching effort to gain insight into users’ everyday lives and to detect potential emerging Facebook competitors.

If users install the app on their phone, and agree to the extra-complicated terms of service, they get $20 (in gift cards), and additional $20 payments for referring friends. Meanwhile, Facebook gets almost every single piece of sensitive data transmitted through their phones – including private messages, photos, web browsing activity and more. Facebook’s level of access to personal data and activity would make intelligence agencies such as the U.S. National Security Agency envious.

The imbalance of power here is astounding. But to cash-strapped teens who don’t understand just how much they’re giving away (and let’s face it – no one could understand the legalese in these intentionally long, complex user agreements), it seems like easy money.

A rebrand of a banned app

The app lets Facebook suck in all of a user’s phone and web activity, much like another app called Onavo that Apple banned last June. Research is essentially a rebranded version of Onavo, meaning Facebook is still flagrantly flouting the rules and knowingly undermining their relationship with Apple.

Why is Facebook doing this? Simple: so they can figure out which competitors to kill, which to buy, and what new features to develop next. It’s extremely profitable for Facebook to glean info such as Amazon purchase history – which they actually did ask users to screencap for them – and create an accurate portrait of purchasing habits and other user trends, so they can foresee what their next steps should be in the big picture. 

They knew to buy WhatsApp, for example, because Onavo’s tracking revealed that younger users were having twice as many conversations on WhatsApp as on Facebook Messenger. Not only did they know to buy it, they also had the advantage of knowing how much WhatsApp was truly worth and what they should pay for it.

Tricky tracking

Facebook is going about all this with a disturbing level of surveillance that’s normally reserved for corporate security or government agencies.  

The Research app initially gives no clue that it’s connected to Facebook; that’s also intentionally misleading, because Facebook is well aware that teenagers are leaving their platform in droves, so if they can convince teens to download a seemingly unrelated app, they still get all that valuable data. 

They also used tools Apple provides for app-testing purposes – not for mass surveillance – violating not just users’ trust, but also the trust of their technology partners and providers.

There’s no way to give truly informed consent

Facebook always positions themselves as harmless or, at worst, incompetent, but after the last two years of their repeated abuses we know that’s simply not the case. They’re saying, “You’ve got nothing to hide, so download this app, help us improve our service, and get paid for it.” But you’re giving up your privacy for an insultingly low compensation.

And if Facebook’s internal security practices turn out to be as bad as its privacy practices, there’s a real risk that your highly personal information could fall into the wrong hands.

Facebook will stop at nothing to leverage their monopoly to secure their market position.  

What can you do? Don’t give in! Get Facebook and affiliated apps off your phone, petition for privacy to be upheld in all levels of government, and push for lawmakers to finally hold Facebook accountable. 

Apple loses face with FaceTime bug

Apple may value user privacy more than the other tech giants, but even they aren’t immune to issues that compromise that privacy.  

In late January, a FaceTime group chat error let users hear audio from the person at the other end before they’d picked up. In some cases, the device also broadcast video. The audio and video functions were enabled early, in other words, making for an unintentional – but still very embarrassing – mistake on Apple’s part! 

A bug in the system

Your cool fact for the day: the term “bug” comes from the early days of computing, when real insects would get inside mid-20th-century machines – most famously a moth found crushed in a relay of Harvard’s Mark II in 1947 – and interfere with their operation.

We now use the expression “bug” to refer to any unintentional software error.  

This FaceTime mistake was introduced in a software update, and only discovered recently.

Working out a fix

Intentions mean a lot – we know, at least, that this malfunction wasn’t perpetrated by a nation state or criminal group; it’s a bug, not a deliberate hack. 

On Monday, Apple said it was working on a software patch to solve the problem. They’d disabled the group chat functionality – meaning users could still chat one-on-one and their FaceTime app would still work – and Apple promised to push out an update to Mac and iOS devices to fix the flaw. On Friday, they apologized for the error. 

Do you really need to cover your webcam?

A good way to nip this kind of privacy issue in the bud is to cover the camera on your laptop, tablet and phone, either with a quick solution like electrical tape, or with an adhesive or attachable device specifically made to cover webcams. These cheap, quick options could save you a lot of hassle in the long run and give you some peace of mind.

Of course, this type of glitch is not specific to FaceTime. There are plenty of good reasons to cover that cam: other pieces of malware and hacks have surfaced that are able to turn cameras on – affecting Macs and PCs – without activating the camera lights to tip you off that they’re functioning.   

Another thing you can do is turn off FaceTime in your phone’s settings until the proper security update is pushed out.

As always, for the sake of your own privacy, remember that no tech is immune to human error!

Don't take the '10-year challenge' at face value

By now everyone has seen the “10-year challenge” meme: you share a photo of yourself from a decade ago alongside another that’s recent. It’s a way to show friends how well – or how poorly – you've aged, and to share and comment on photos of others on social media. Seems like harmless fun, right? 

Maybe, but maybe not.  

The perfect data set

No one is sure where the “challenge” originated, and questions are arising about whether it’s a data mine for facial recognition software. It’s easy to see how that’s possible, because the meme incorporates the perfect data set: millions of people self-attesting that this photo is them 10 years ago, and that one is them now, attached to the same identity.  
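To see why this self-labeling is so valuable, here’s a hypothetical sketch (every name and structure below is invented purely for illustration, not taken from any real system) of how meme posts could be turned into exactly the kind of labeled before-and-after pairs an aging-robust face-recognition model would train on:

```python
# Illustrative only: how "10-year challenge" posts become a labeled dataset.
from dataclasses import dataclass

@dataclass
class Post:
    user_id: str      # the identity both photos are self-attested to belong to
    photo_then: str   # the "10 years ago" photo
    photo_now: str    # the recent photo

def to_training_pairs(posts):
    """Turn self-labeled meme posts into (old_face, new_face, identity) tuples -
    the labeled examples a model learning age-invariant faces would need."""
    return [(p.photo_then, p.photo_now, p.user_id) for p in posts]

posts = [Post("alice", "alice_2009.jpg", "alice_2019.jpg")]
pairs = to_training_pairs(posts)
```

The point of the sketch is how little work is involved: participants have already done the hard part (verifying identity and dating both photos), so the “dataset” assembles itself.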

Your face is increasingly becoming a key part of your online identity. Giving it out without securing it could come back to haunt you.  

The old notion of a photo – a moment in time, captured and shared with family and close friends in an innocuous setting – is long gone. Photos can be weaponized and used to attack your online identity, to defraud you, even to break into your devices. 

Those pics are part of your biometric data

Biometric data include your face, your thumbprint and retinal scans – and in China, software has been developed that can identify people solely by the way they walk. “Gait recognition” surveillance may (hopefully) never become part of life in the Western world, but other, less obvious ways of tracking people are on the rise, such as DNA kits sold by various companies – some of which disclose in their terms of service that by participating, you grant the testing company a royalty-free, perpetual licence to your DNA.

These DNA kits could reveal that you have a genetic disease, and if that info were ever sold to insurance companies, that could adversely impact you and your family.  

How private do we need to become?

Photo sharing is huge, and it’s getting people in major trouble – from the “sextortion” of Tony Clement to “deepfakes,” which mine the massive volume of available photos to apply someone’s likeness to videos that look scarily legitimate.

The more images of yourself out there, the more data there is to work with, and the easier it is for your image to be weaponized against you. 

It’s probably not realistic to tell people to stop sharing photos of themselves online, but it doesn’t hurt to be skeptical and think carefully about how your participation in these things – DNA testing kits, quizzes on social media, trends like the 10-year challenge – could be used against you. 

Privacy is not dead!

If anything, privacy is more important now than ever, as tech users are realizing that the more info they give out, the more they may be compromising their identity – their whole life. Privacy requires people to be educated and empowered about the limits and failings of technology, and to act accordingly. 

Alexa, what are you doing with my data?

Well, that didn’t take long. The Amazon Echo has been on the market a few short years and already unnerving stories of the smart speaker’s failings are cropping up worldwide, including in Germany, where an Amazon customer took advantage of the new EU General Data Protection Regulation (GDPR) that grants individuals access to their personal data.