Mass transit system or tool for mass surveillance?

Here’s one for our Upper Canadian readers: Metrolinx, the Crown agency that manages public transit in Toronto and surrounding areas, has made the news again for sharing passenger data stored on Presto fare cards with law enforcement – without asking for customer consent or insisting on warrants from police.   

In 2018, there were 22 cases related to criminal investigations or suspected offences where the agency revealed card users’ information without a court order. 

Accountability is everything

This raises the question: Could Presto become a surveillance tool?  

The ease with which card data is being disclosed should concern everyone. We’re a country founded on the rule of law. Unless it’s a life-and-death situation that requires police and Metrolinx to act quickly, this kind of immediate access to data shouldn’t happen. And even in an emergency, Metrolinx and police should have to thoroughly explain why the normal process for obtaining data was bypassed.

In a criminal investigation, police hate doing the paperwork involved – who wouldn’t? Especially when simply asking the agency to hand over the information has a proven track record of working. But due process is crucial to preserving the privacy rights of citizens.

Information is power

Systems like Presto – which collect and store large quantities of information online – can be hacked. And travel data could be very valuable to a hacker looking to blackmail or extort their victims.

Imagine a man who’s having an affair tells his wife he’s in one place, but his Presto card data proves otherwise. Or an employee calls in sick to work when they were really at a job interview, and their transit data shows precisely where they went. The scariest example is stalking – when people flee bad relationships, the last thing they need is another layer of surveillance to contend with, when their phones, cars and other tech may already be tracking them.

As with many tech advancements, the promised convenience seems to outweigh the risk at first: with Presto cards, passengers get perks such as avoiding lineups by being able to add funds to cards online; they can simply tap a card rather than fumble in their wallet for tokens before their morning commute. The only perk they’re giving up is arguably the best one of all: anonymity! 

Proper people, processes, and technology

Metrolinx, like many businesses and organizations offering speed and convenience, probably isn’t as mature as it needs to be when it comes to handling people’s private information. It’s simply doing the best it can with the limited resources it has.

And it’s hardly just Metrolinx falling short – cases have been thrown out of court when due process isn’t followed, or when warrants aren’t obtained before gathering critical evidence.

Unless the proper people, processes and technology are in place, there’s no way to keep up with the complex issue of privacy rights. 

Your smart speaker could be recording every stupid thing you say in your home!

Smart speakers are a dime a dozen these days, and one of the reasons they’re so cheap and accessible is that users give up their privacy simply by setting them up in their home. Convenience isn’t free!

An infamous incident: an Amazon customer in Germany requested access to his Alexa voice files and received not only his own recordings but also 1,700 recordings belonging to another customer!

Check out our video below inspired by that data breach!

If you’re using any type of digital assistant, there are a few things you should do:

  • Don’t put smart devices in bedrooms or bathrooms. For obvious reasons.

  • Ideally, you should put your smart devices on a separate Wi-Fi network (i.e. your guest network).

  • Turn on auto-updates for all smart devices.

  • Secure your Amazon and Google accounts with multi-factor authentication.

  • Set a monthly calendar reminder to delete your old audio recordings from Amazon Echo and Google Home. Apple does this automatically for you with HomePod.

In general, think carefully about any connected device you put in your home.

How much is your sensitive info worth to Facebook? About $20

Facebook has been targeting teenagers and young adults aged 13 to 35 with its VPN app “Research,” part of the company’s broader “Project Atlas,” a far-reaching effort to gain insight into people’s everyday lives and to detect potential emerging competitors.

If users install the app on their phone and agree to the extra-complicated terms of service, they get $20 in gift cards, plus additional $20 payments for referring friends. Meanwhile, Facebook gets almost every piece of sensitive data transmitted through their phones – including private messages, photos, web browsing activity and more. That level of access to personal data and activity would make intelligence agencies such as the U.S. National Security Agency envious.

The imbalance of power here is astounding. But to cash-strapped teens who don’t understand just how much they’re giving away (and let’s face it – no one could understand the legalese in these intentionally long, complex user agreements), it seems like easy money.

A rebrand of a banned app

The app lets Facebook suck in all of a user’s phone and web activity, much like another app called Onavo that Apple banned last June. Research is basically a rebranded version of Onavo, meaning Facebook is still flagrantly flouting the rules and knowingly undermining its relationship with Apple.

Why is Facebook doing this? Simple: so they can figure out which competitors to kill, which to buy, and what new features to develop next. It’s extremely profitable for Facebook to glean info such as Amazon purchase history – which they actually did ask users to screencap for them – and build an accurate portrait of purchasing habits and other user trends, so they can plan their next big-picture moves.

They knew to buy WhatsApp, for example, because Onavo’s tracking revealed that twice as many conversations in that age group were happening on WhatsApp as on Facebook Messenger. Not only did they know to buy it, they also had the advantage of knowing how much it was truly worth and what they should pay for it.

Tricky tracking

Facebook is going about all this with a disturbing level of surveillance that’s normally reserved for corporate security or government agencies.  

The Research app initially gives no clue that it’s connected to Facebook. That’s intentionally misleading: Facebook is well aware that teenagers are leaving its platform in droves, and if it can convince teens to download a seemingly unrelated app, it still gets all that valuable data.

They also used tools Apple provides for app-testing purposes – not for mass surveillance – violating not just users’ trust but also the trust of their technology partners and providers.

There’s no way to give truly informed consent

Facebook always positions themselves as harmless or, at worst, incompetent, but after the last two years of repeated abuses we know that’s simply not the case. They’re saying, “You’ve got nothing to hide, so download this app, help us improve our service, and get paid for it.” But you’re giving up your privacy for insultingly low compensation.

And if Facebook’s internal security practices are as bad as its privacy practices, there’s a risk your highly personal information could fall into the wrong hands.

Facebook will stop at nothing to leverage their monopoly to secure their market position.  

What can you do? Don’t give in! Get Facebook and affiliated apps off your phone, petition for privacy to be upheld at all levels of government, and push for lawmakers to finally hold Facebook accountable.

Apple loses face with FaceTime bug

Apple may value user privacy more than the other tech giants, but even they aren’t immune to issues that compromise that privacy.  

In late January, a FaceTime group chat error let users hear audio from the person at the other end before they’d picked up. In some cases, the device also broadcast video. The audio and video functions were enabled early, in other words, making for an unintentional – but still very embarrassing – mistake on Apple’s part! 

A bug in the system

Your cool fact for the day: the term “bug” was popularized in the early days of computing, when real insects would occasionally get inside mid-20th-century machines – most famously a moth found jammed in a relay – and cause them to malfunction.

We now use the expression “bug” to refer to any unintentional software error.  

This FaceTime mistake was introduced in a software update, and only discovered recently.

Working out a fix

Intentions mean a lot – we know, at least, that this malfunction wasn’t perpetrated by a nation state or criminal group; it’s a bug, not a deliberate hack. 

On Monday, Apple said it was working on a software patch to solve the problem. The company disabled the group chat functionality – meaning users could still chat one-on-one and their FaceTime app would still work – and promised to push out an update to Mac and iOS devices to fix the flaw. On Friday, Apple apologized for the error.

Do you really need to cover your webcam?

A good way to nip this kind of privacy issue in the bud is to cover the camera on your laptop, tablet and phone, either with a quick solution like electrical tape, or with an adhesive or attachable device specifically made to cover webcams. These cheap, quick options could save you a lot of hassle in the long run and give you some peace of mind.

Of course, this type of glitch is not specific to FaceTime. There are plenty of good reasons to cover that cam: other pieces of malware and hacks have surfaced that are able to turn cameras on – affecting Macs and PCs – without activating the camera lights to tip you off that they’re functioning.   

Another thing you can do is go into your phone’s settings and turn off FaceTime for now, until the proper security update is pushed out.

As always, for the sake of your own privacy, remember that no tech is immune to human error!

Don't take the '10-year challenge' at face value

By now everyone has seen the “10-year challenge” meme: you share a photo of yourself from a decade ago alongside another that’s recent. It’s a way to show friends how well – or how poorly – you've aged, and to share and comment on photos of others on social media. Seems like harmless fun, right? 

Maybe, but maybe not.  

The perfect data set

No one is sure where the “challenge” originated, and questions are arising about whether it’s really an exercise in mining data for facial recognition software. It’s easy to see how that’s possible, because the meme generates the perfect data set: millions of people self-attesting that this photo is them 10 years ago and that one is them now, both attached to the same identity.

Your face is increasingly becoming a key part of your online identity. Giving it out without securing it could come back to haunt you.  

The old notion of a photo – a moment in time, captured and shared with family and close friends in an innocuous setting – is long gone. Photos can be weaponized and used to attack your online identity, to defraud you, even to break into your devices. 

Those pics are part of your biometric data

Biometric data includes your face, your thumbprint and retinal scans – and in China, software has been developed that can identify people solely by the way they walk! “Gait recognition” surveillance may (hopefully) never become part of life in the Western world, but other, less obvious ways of tracking people are on the rise, such as the DNA kits sold by various companies, some of which disclose in their terms of service that by participating, you grant the company doing the testing a royalty-free, perpetual licence to your DNA.

These DNA kits could reveal that you have a genetic disease, and if that info were ever sold to insurance companies, that could adversely impact you and your family.  

How private do we need to become?

Photo sharing is huge, and it’s getting people into major trouble – from the “sextortion” of Tony Clement to “deepfakes,” which draw on the massive volume of available photos to create a realistic depiction of someone and apply their image to videos that look scarily legitimate.

The more images of yourself are out there, the more data there is to work with, and the easier it is for your image to be weaponized against you.

It’s probably not realistic to tell people to stop sharing photos of themselves online, but it doesn’t hurt to be skeptical and think carefully about how your participation in these things – DNA testing kits, quizzes on social media, trends like the 10-year challenge – could be used against you. 

Privacy is not dead!

If anything, privacy is more important now than ever, as tech users are realizing that the more info they give out, the more they may be compromising their identity – their whole life. Privacy requires people to be educated and empowered about the limits and failings of technology, and to act accordingly. 

A parent's guide to protecting your kids online

In recent months, a handful of New Brunswick families found out the hard way that if kids have internet access, they also have access to all the bad things that come along with the online world. Four children between the ages of eight and twelve voluntarily sent nude images or videos of themselves that were later discovered by RCMP on various unspecified free websites.   

Perhaps the only positive outcome of this story is that, because it hits so close to home, it serves as a much-needed wake-up call to other parents. Parents often say, “My kid wouldn’t do that” – but we’re learning that, in 2019, you may know your child, yet if you don’t monitor their internet activities, you can never really be sure what they’re up to.

Prevention, not punishment

This stuff is scary, but there are effective ways of protecting kids from the darker side of this age of connectivity. Rather than punishing negative behaviour after the fact, prevent it. 

How, you ask? Two approaches work. 

First: Maintain an open dialogue with children about what’s acceptable online. Make yourself out to be an ally, not an enemy, so that kids feel comfortable bringing issues to you before they even begin.  

Ask kids who they’re talking to online; explain that adults shouldn’t be pursuing relationships with kids; talk about healthy versus unhealthy relationships and about ways to get out of uncomfortable situations online; and speak openly about the kinds of things you do online, so children know how the internet should be used.

Second: It doesn’t get much more tangible than physically removing devices from kids’ bedrooms – especially anything with a webcam. They don’t need it!

Prevent your child from seeing things they shouldn’t online by changing some basic security settings – on the device itself, as well as through the parental controls offered by your internet service provider.

Take safety a step further by plugging a cool gadget like CleanRouter or Circle into your router. These control what all other devices are able to do while on the Wi-Fi network at home: they can filter out age-inappropriate content, set internet curfews, and generally monitor what kids are doing online. 

It can happen to you, but it doesn’t have to

Studies show that 60% of people under the age of 30 have created an intimate image of themselves – and once a pic is snapped, it can quickly make its way out of your hands. If adults can fall victim to this kind of thing, kids obviously can too.

It’s important to remind your children (and yourself!) of the legal implications of online activities — sharing intimate images without consent is illegal.  

A good guideline: Tell your kids, “Don’t do anything online that you wouldn’t do at the mall.” 

Alexa, what are you doing with my data?

Well, that didn’t take long. The Amazon Echo has been on the market a few short years and already unnerving stories of the smart speaker’s failings are cropping up worldwide, including in Germany, where an Amazon customer took advantage of the new EU General Data Protection Regulation (GDPR) that grants individuals access to their personal data.