Have you been pwned?

If you’ve ever wondered how exposed you are to hacking or how vulnerable your online presence might be, now you can find out in a matter of seconds.  

Here’s what you do: go to the site haveibeenpwned.com and input your email address. Hit enter. Moments later you’ll get either an all-clear saying “Good news – no pwnage found!” or an “Oh no – pwned!” message letting you know how many breached sites that email address has appeared on.  
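If you'd rather script the check, the site also exposes an API. Here's a minimal sketch in Python – note that as of API v3 the breached-account endpoint requires a paid `hibp-api-key`, so the key handling shown here is an assumption based on the documented interface:

```python
from urllib.parse import quote
from urllib.request import Request, urlopen

API_BASE = "https://haveibeenpwned.com/api/v3/breachedaccount/"

def breach_check_url(email: str) -> str:
    """Build the breached-account lookup URL for a given address."""
    return API_BASE + quote(email)

def check_breaches(email: str, api_key: str) -> str:
    """Query Have I Been Pwned for breaches involving this address.
    Requires a (paid) API key; the service answers 404 when the
    address appears in no breaches ("no pwnage found!")."""
    req = Request(breach_check_url(email),
                  headers={"hibp-api-key": api_key,
                           "user-agent": "breach-check-example"})
    with urlopen(req) as resp:
        return resp.read().decode()
```

The email address is URL-encoded before it goes into the path, so addresses containing `@` or spaces are handled safely.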

More spam, more attacks

“Pwn” is an old gaming slang term derived from the verb “own.” According to the Wikipedia page, “pwn” “implies domination or humiliation of a rival, primarily in the internet-based video game culture to taunt an opponent who has just been soundly defeated (e.g., ‘You just got pwned!’).” 

Troy Hunt, a respected security researcher, created the website, which lets you check whether your email and/or passwords have been compromised, and which sites your information was leaked from.   

If you have appeared in any breaches, you will inevitably be getting more spam, and possibly even targeted criminal attacks. It’s a good idea to run both your work and personal email addresses through this tool to see how exposed you are.

Hunt’s password service also allows you to securely check whether your passwords are in one of these data breaches. He has compiled a data set of 551 million passwords, and if you use passwords that appear here, you should change them immediately! 
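The "securely" part works through what the service calls k-anonymity: your password is hashed with SHA-1, and only the first five characters of the hash ever leave your machine; the matching happens locally. A minimal sketch, using the publicly documented range endpoint:

```python
import hashlib
from urllib.request import Request, urlopen

def hash_split(password: str):
    """SHA-1 the password and split the 40-char hex digest into the
    5-char prefix sent to the API and the 35-char suffix kept local."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str) -> int:
    """Return how many times this password appears in the breach corpus.
    Only the 5-character hash prefix is ever transmitted."""
    prefix, suffix = hash_split(password)
    req = Request(f"https://api.pwnedpasswords.com/range/{prefix}",
                  headers={"user-agent": "pwned-check-example"})
    with urlopen(req) as resp:
        # Response is one "SUFFIX:COUNT" pair per line for that prefix
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # suffix not in the returned set: no pwnage found
```

A nonzero count means the password is in the breach data and should be changed immediately.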

How can you secure yourself?

The site suggests three steps to better security.  

1) Protect yourself using 1Password (or another reputable password manager, such as LastPass) to create and save strong passwords for each site you use. Don’t rely on built-in browser password storage – Google Chrome, for example, will often ask, “Do you wish to remember the password for this site?” A third-party password manager is both more secure and more convenient.

2) Enable two-factor authentication

3) Subscribe to notifications for any future breaches on haveibeenpwned.com. This will keep you informed about the status of your accounts and passwords.

Remember: while this site is not a catch-all fix for the vulnerabilities in your online identity, it is a useful tool that can go a long way toward boosting your overall security.

Google: violating home and public privacy

The reasons to install a home security system are obvious: you want to see what’s going on in your house when you’re not there, you want to deter would-be thieves, and in the unlikely event of a break-in, you want to be able to identify the perpetrators. And you just want to feel safe and secure.  

Last month, Google announced that users could now enable the Google Assistant through their Nest cams. Instead of celebrating, though, users were irate, because Google inadvertently revealed that the cams had contained a built-in microphone the entire time. There was never any indication in the packaging, marketing materials, their website – anywhere – that a mic was part of the deal.

Imagine learning that the device you installed to keep you secure may have been secretly recording everything you said. Talk about a betrayal of trust!

Google denied that the microphone had ever been a secret, but it’s tough to buy that; if it were a feature they intended you to know about, they would have bragged about it from the beginning.

Victim blaming

The Nest cams have been riddled with flaws: last month the indoor and outdoor cameras had a bug that made them behave as if someone were accessing “live view” mode when no one was. There have been several cases of hackers taking control of the cams; in response, Google blamed the victims, saying the fault lay with customers and their weak passwords.

From smart homes to smart cities

Even if you don’t own any smart-home tech, you may not be safe from Google’s clutches; a project being piloted by its sister company Sidewalk Labs has been plagued with privacy and ethics problems.

Sidewalk Labs, a subsidiary of Alphabet Inc., Google’s parent company, wants to create a connected community in Toronto that will measure traffic flow, embed sensors so lights will go on and off more efficiently, track where and when people are out walking in order to plan when it’s best to clear snow, and so on. Again, instead of being thrilled by the advancement, many people are calling this project out as nothing but a big data mine. 

More information about the scale and scope of the project is emerging (they want more real estate and more money, basically) showing that Google’s aspirations for Toronto are far greater than they’d initially let on. This is in keeping with Google’s almost pathological style of hiding the big picture from the public, only releasing details in dribs and drabs. 

Refusal to do the bare minimum

Sidewalk Labs has already squandered a lot of goodwill, first by losing the support of former Ontario privacy commissioner Dr. Ann Cavoukian with their refusal to implement common-sense tech that would de-identify people from video surveillance installed in public places.  

Surveillance can increase convenience, but unless there are measures in place to protect people from the array of abuses that can arise, as we’re seeing in China, the data collection technology that goes along with surveillance is just waiting to be exploited.  

Whether it’s with Nest smart homes or Sidewalk Labs, Google needs to be clear about what they want from the consumer and the public, and be explicit about how they plan to put privacy at the forefront of all their products and projects. And we need to hold them accountable for our data privacy – if we don’t, no one will.

Tracking your health with an app? Facebook is too

You don’t even have to be a Facebook user for the social media platform to collect data on you – and highly personal data, at that! 

If you’re using a phone app that tracks things like your menstrual cycle, heart rate, exercise habits and calories burned, chances are good that that app is sending that information along to – you guessed it – Facebook.  

Fuel for advertising

A Facebook-provided analytics tool called “App Events” lets app developers track and store user data, then send it straight to Facebook, who then use it to fuel their advertising algorithms. Developers use App Events to track how and when people use their apps, and to gain insights for their own advertising purposes.

The social media platform was caught acquiring sensitive data from Flo Period & Ovulation Tracker and around 30 other apps, so the information could be used for hyper-targeted ads. People were willingly inputting this info into the apps, but they had no idea what would happen to the data beyond the app’s primary function.

An example: Say a woman is trying to get pregnant, so she’s tracking her periods, ovulation and sexual activity in the Flo Period app. The app sends that information to Facebook, who then hit her with ads for maternity clothing, prenatal vitamins, diapers and daycares in her area.  

The goal of most tech is to slurp up information and turn it into profit, no matter how private the data. And it doesn’t get much more private than bodily functions! 

Feigning ignorance

Facebook claims it requires apps to tell users what info is shared and forbids apps from sending intimate data. But it did nothing to stop the flow of that sensitive data.  

Given their lax attitude toward data privacy, it’s not hard to imagine Facebook selling private information to health insurers, who would pay a premium for it and even use it to decide who they’ll cover. Free health apps have already been known to give up sensitive information to insurance companies – why wouldn’t Facebook do it?  

Digital gangsters

A Wall Street Journal investigation found that many of these apps didn’t disclose that they would be sharing this information with third parties, or with Facebook specifically. Shortly after the Journal story broke, New York Governor Andrew Cuomo called for further investigation into this invasion of privacy.

This all comes on the heels of a scathing report out of the U.K. that essentially called Facebook digital gangsters who are abusing the power of their platform. And it’s not just Facebook; Google and Amazon have a scary amount of data on every one of us, which means we need to be taking this seriously.  

Data privacy should be an election issue

While data privacy is finally starting to be a high priority in the States, with investigations into breaches and tougher policies mirroring those of Europe, in Canada we’re just not there yet. We need to push for stricter privacy legislation and make it an election issue. We need to demand accountability from these data-hoarding corporations.

Cybercriminals: Living large on the lam

Cyberattacks may seem like an ambiguous threat – happening to someone else, somewhere else. But serious cybercrime is hitting close to home, with attacks from North Korea now targeting Canadian retail banking customers.  

Security expert Christopher Porter highlighted this threat at a House of Commons meeting earlier this month. He noted that top Canadian financial institutions were exposed to state-sponsored cybertheft from North Korea as recently as February 2017.

What they want

The attack redirected people to malicious downloads that would then take control of their computers and access their bank accounts. The criminals fund the North Korean nuclear program with the stolen money, targeting financial institutions, companies and retail customers. According to Porter, these cyberattacks show a level of sophistication once seen only among nation-state intelligence groups like the NSA.

How they’re getting it

"Man-in-the-middle" attacks involve an attacker covertly relaying or changing the communication between two parties who believe they’re communicating directly.  

In this case the “man in the middle” hacks into your device, imitates your banking sign-on page, and lures you to enter your private information. When you’re done banking, the hacker logs on with your credentials and steals your money.  
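The relay-and-record pattern described above can be sketched as a toy, in-process simulation – purely illustrative, with made-up class names and no real networking or cryptography involved:

```python
class BankServer:
    """Stand-in for the real banking site."""
    def login(self, username: str, password: str) -> str:
        return f"Welcome, {username}!"

class ManInTheMiddle:
    """Sits between the victim and the bank, relaying traffic unchanged
    while quietly recording the credentials that pass through."""
    def __init__(self, real_server: BankServer):
        self.real_server = real_server
        self.stolen = []  # captured (username, password) pairs

    def login(self, username: str, password: str) -> str:
        self.stolen.append((username, password))          # record the secrets
        return self.real_server.login(username, password)  # relay as normal

# The victim believes they are talking to the bank directly:
bank = BankServer()
attacker = ManInTheMiddle(bank)
reply = attacker.login("alice", "hunter2")  # looks like a perfectly normal login
```

Because the attacker forwards everything faithfully, the victim sees a normal banking session – which is exactly why these attacks are so hard to spot from the user's side.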

Why they’re successful

The perpetrators of cybercrime are the same groups known for organized crime: weapons and human trafficking, drugs, et cetera. Cyber represents a booming growth industry for them.

Cyberattacks are relatively easy to pull off and extremely tough to police. In Canada, despite their best efforts, police can identify a suspect in only 7% of cases. Criminals are going where the police are not, so their odds of getting away with these crimes are much higher than with traditional ones.

Beyond the anonymity cyber affords criminals, our telecom infrastructure was designed decades ago with little consideration for cybercrime, and it hasn’t adapted as quickly as criminals have. Staying ahead of our outdated safety measures, they’re even bypassing newer protections such as multi-factor authentication.

Tom Cruise and the A.I. myth

One way of staying ahead of criminals is to stop them before they have the chance to commit a crime.  

In the 2002 sci-fi film Minority Report, police were able to predict and arrest criminals before they offended. That movie feels less like science fiction today, considering real police units in the U.K. are now using algorithms to direct officers to patrol specific high-crime areas – areas that are, unfortunately, disproportionately over-policed as it is.

In Canada, we’re also experimenting with artificial intelligence (A.I.) to accelerate bureaucratic processes. One well-intended effort is the use of A.I. in processing immigration applications. However, this attempt to use technology to serve immigrants more effectively raises concerns about built-in algorithmic biases and inevitable abuses by authorities.

We may be introducing more problems than we’re solving by using algorithms and A.I. to tackle complex social problems. One of the biggest myths about A.I. is that a computer removes subjectivity and therefore can’t be biased. But the data fed into these computers is inherently flawed, because the people who create it are flawed.

How can we respond?

Protecting ourselves from cyberattacks starts with awareness. The more people learn about their cyber risks and the simple steps they can take to reduce them, the more time our IT and security professionals can dedicate to putting out the big fires.

Amazon: now in the business of tracking babies

The demand for smart products for the home is growing, and it was only a matter of time before the purveyors of smart tech turned their attention to a booming market: babies. Enter Hatch Baby, a smart nursery company backed by Amazon’s Alexa Fund.

The company launched in 2014, made its way to Shark Tank by 2016, and its offerings are now among the top 100 baby products (of more than 200,000) on the Amazon marketplace. Hatch Baby sells a smart changing pad that can track your baby’s weight; for older kids, there’s a smart nightlight/sound machine. These devices are connected to an app that lets parents control them and track their kids’ interaction with them.

Amazon and Google are known for collecting and storing way too much data on their customers, and now that’s starting literally from birth.  

If the product testimonials are to be believed, these kid-tracking gadgets are not only life-changing, but necessary. Amazon promises “peace of mind.” Make no mistake: companies such as Google and Amazon are not in the business of helping parents raise their children. They’re in the business of securing market share, killing the competition, and dominating all our time and money.  

Surveillance and censorship

If you’ve seen Black Mirror, you probably recall the “Arkangel” episode in which a woman opts to have a chip implanted into her daughter that allows the mom to track all her movements, to see everything in her daughter’s line of sight, and to pixelate all images that could be disturbing to her child. While the chip technology is at first useful for ensuring the daughter’s safety, as she gets older, the daughter rebels against the constant tracking and surveillance. The mom is addicted to spying on her daughter, and the daughter despises her for it.

It’s easy for us to predict the disastrous implications when we’re watching this fictionalized narrative, so why can’t we foresee the ill effects of real-life tracking tech such as Hatch Baby? 

Resilience versus convenience

As our lives become more convenient and efficient, we become less resilient. With Amazon and Google devouring every aspect of our lives and selling us almost everything we buy, the small- and medium-sized businesses that are the backbone of the Canadian economy suffer. We’re setting ourselves up for economic failure.  

Amazon has been caught, according to a Bloomberg report, strong-arming other smart-home tech companies into letting their devices communicate with Alexa. Alexa collects data from smart light-switches about when a light has been turned on or off, so Amazon knows when the customer is home; smart TVs report what channels customers watch; smart locks let Amazon know whether the front door is bolted.

They see you when you’re sleeping

This means Google and Amazon know when you’re asleep, when you’re awake, when you’re home, what shows you’re watching and when, the current temperature in your living room, when you’re eating, what you’re buying – everything. They demand this data without our informed consent, then appease us with the lie that it’s all for our convenience.  

Who is it all for?

Where Hatch Baby is concerned, parents need to put themselves in their children’s shoes and ask whether their kids’ lives being tracked is really to their benefit. We need to think about whether we need it – we've gotten by without this kind of “smart” tech till now, and we can continue to do so.  

Mass transit system or tool for mass surveillance?

Here’s one for our Upper Canadian readers: Metrolinx, the Crown agency that manages public transit in Toronto and surrounding areas, has made the news again for sharing passenger data stored on Presto fare cards with law enforcement – without asking for customer consent or insisting on warrants from police.   

In 2018, there were 22 cases related to criminal investigations or suspected offences where the agency revealed card users’ information without a court order. 

Accountability is everything

This raises the question: Could Presto become a surveillance tool?  

The ease with which card data is being disclosed should concern us. We’re a country based on the rule of law. Unless it’s a life-and-death situation requiring police to act quickly, Metrolinx shouldn’t permit this kind of immediate access to data. Even in an emergency, Metrolinx and police should have to thoroughly explain why the normal process of acquiring data was bypassed.

In a criminal investigation, police would rather skip the paperwork – who wouldn’t? Especially when simply asking the agency to hand over its information has a proven track record of working. But due process is a crucial safeguard of citizens’ privacy rights.

Information is power

Systems like Presto – where information is accumulated online in mass quantities and stored – can be hacked. And travel information could be very valuable for a hacker who wants to blackmail and extort their victim(s).  

Imagine a man who’s having an affair tells his wife he’s in one place, but his Presto card data proves otherwise. Or an employee calls in sick to work when they were really at a job interview, and their transit data shows precisely where they went. The scariest example is stalking – when people flee bad relationships, the last thing they need is another layer of surveillance to combat, when their phones, cars and other tech may already be tracking them.

As with many tech advancements, the promised convenience seems to outweigh the risk at first: with Presto cards, passengers get perks such as avoiding lineups by being able to add funds to cards online; they can simply tap a card rather than fumble in their wallet for tokens before their morning commute. The only perk they’re giving up is arguably the best one of all: anonymity! 

Proper people, processes, and technology

Metrolinx, like many businesses and organizations offering speed and convenience, probably isn’t as mature as it needs to be when it comes to handling people’s private information. They’re simply doing the best they can with the limited resources they have.

It’s hardly just the TTC that’s falling short – cases have been thrown out of court when due process wasn’t followed, or when warrants weren’t obtained for critical evidence.

Unless the proper people, processes and technology are in place, there’s no way to keep up with the complex issue of privacy rights. 

How much is your sensitive info worth to Facebook? About $20

Facebook has been targeting teenagers and young adults with their VPN app “Research,” aimed at 13- to 35-year-olds. The app is part of their overall “Project Atlas,” a far-reaching effort to gain insight into users’ everyday lives and to detect potential emerging competitors.

If users install the app on their phone and agree to the extra-complicated terms of service, they get $20 in gift cards, plus additional $20 payments for referring friends. Meanwhile, Facebook gets almost every piece of sensitive data transmitted through their phones – including private messages, photos, web browsing activity and more. Facebook’s level of access to personal data and activity would make intelligence agencies such as the U.S. National Security Agency envious.

The imbalance of power here is astounding. But to cash-strapped teens who don’t understand just how much they’re giving away (and let’s face it – no one could understand the legalese in these intentionally long, complex user agreements), it seems like easy money.

A rebrand of a banned app

The app lets Facebook suck in all the users’ phone and web activity, much like another app called Onavo that Apple banned last June. Research is basically a rebranded version of Onavo, meaning Facebook is still flagrantly flouting the rules and knowingly undermining their relationship with Apple.

Why is Facebook doing this? Simple: so they can figure out which competitors to kill, which to buy, and what new features to develop next. It’s extremely profitable for Facebook to glean info such as Amazon purchase history – which they actually did ask users to screencap for them – and create an accurate portrait of purchasing habits and other user trends, so they can foresee what their next steps should be in the big picture. 

They knew to buy WhatsApp, for example, because through Onavo’s tracking they discovered that twice as many conversations for that age group were happening on WhatsApp as on Facebook Messenger. Not only did they know to buy it, they also knew how much it was truly worth and what they should pay for it.

Tricky tracking

Facebook is going about all this with a disturbing level of surveillance that’s normally reserved for corporate security or government agencies.  

The Research app initially gives no clue that it’s connected to Facebook; that’s also intentionally misleading, because Facebook is well aware that teenagers are leaving their platform in droves, so if they can convince teens to download a seemingly unrelated app, they still get all that valuable data. 

They also used tools Apple provides for app-testing purposes – not for mass surveillance – violating not just users’ trust, but also the trust of their technology partners and providers.

There’s no way to give truly informed consent

Facebook always positions themselves as harmless or, at worst, incompetent, but after the last two years of repeated abuses we know that’s simply not the case. They’re saying, “You’ve got nothing to hide, so download this app, help us improve our service, and get paid for it.” But you’re giving up your privacy for insultingly low compensation.

And if Facebook’s internal security practices are as bad as its privacy practices, there’s a real risk your highly personal information could fall into the wrong hands.

Facebook will stop at nothing to leverage their monopoly to secure their market position.  

What can you do? Don’t give in! Get Facebook and affiliated apps off your phone, petition for privacy to be upheld in all levels of government, and push for lawmakers to finally hold Facebook accountable. 

Apple loses face with FaceTime bug

Apple may value user privacy more than the other tech giants, but even they aren’t immune to issues that compromise that privacy.  

In late January, a FaceTime group chat error let users hear audio from the person at the other end before they’d picked up. In some cases, the device also broadcast video. The audio and video functions were enabled early, in other words, making for an unintentional – but still very embarrassing – mistake on Apple’s part! 

A bug in the system

Your cool fact for the day: the computing sense of “bug” dates to the mid-20th century, when real insects would crawl into early electromechanical computers, get stuck in the works, and screw up the program – most famously, a moth found squashed in a relay of the Harvard Mark II in 1947.

We now use the expression “bug” to refer to any unintentional software error.  

This FaceTime mistake was introduced in a software update, and only discovered recently.

Working out a fix

Intentions mean a lot – we know, at least, that this malfunction wasn’t perpetrated by a nation state or criminal group; it’s a bug, not a deliberate hack. 

On Monday, Apple said it was working on a software patch to solve the problem. They’d disabled the group chat functionality – meaning users could still chat one-on-one and their FaceTime app would still work – and Apple promised to push out an update to Mac and iOS devices to fix the flaw. On Friday, they apologized for the error. 

Do you really need to cover your webcam?

A good way to nip this kind of privacy issue in the bud is to cover the camera on your laptop, tablet and phone, either with a quick solution like electrical tape, or with an adhesive or attachable device specifically made to cover webcams. These cheap, quick options could save you a lot of hassle in the long run and give you some peace of mind.

Of course, this type of glitch is not specific to FaceTime. There are plenty of good reasons to cover that cam: malware and hacks have surfaced that can turn cameras on – on Macs and PCs alike – without activating the camera light that would tip you off.

Another thing you can do is disable FaceTime on your devices until the proper security update is pushed out.

As always, for the sake of your own privacy, remember that no tech is immune to human error!

Don't take the '10-year challenge' at face value

By now everyone has seen the “10-year challenge” meme: you share a photo of yourself from a decade ago alongside another that’s recent. It’s a way to show friends how well – or how poorly – you've aged, and to share and comment on photos of others on social media. Seems like harmless fun, right? 

Maybe, but maybe not.  

The perfect data set

No one is sure where the “challenge” originated, and questions are arising about whether it’s a data mine for facial recognition software. It’s easy to see how that’s possible, because the meme incorporates the perfect data set: millions of people self-attesting that this photo is them 10 years ago, and that one is them now, attached to the same identity.  

Your face is increasingly becoming a key part of your online identity. Giving it out without securing it could come back to haunt you.  

The old notion of a photo – a moment in time, captured and shared with family and close friends in an innocuous setting – is long gone. Photos can be weaponized and used to attack your online identity, to defraud you, even to break into your devices. 

Those pics are part of your biometric data

Biometric data include your face, your thumbprint and your retinal scans – and in China, software has even been developed that can identify people solely by the way they walk! “Gait recognition” surveillance may (hopefully) never be part of life in the Western world, but other, less obvious ways of tracking people are on the rise, such as the DNA kits sold by various companies – some of which disclose in their terms of service that by participating, you grant the testing company a royalty-free, perpetual licence to your DNA.

These DNA kits could reveal that you have a genetic disease, and if that info were ever sold to insurance companies, that could adversely impact you and your family.  

How private do we need to become?

Photo sharing is huge, and it’s getting people in major trouble – from the “sextortion” of Tony Clement to “deepfakes,” which use the massive volume of available photos to apply someone’s image to videos that look scarily legitimate.

The more images of you that are out there, the more data there is to work with, and the easier it is for your image to be weaponized against you.

It’s probably not realistic to tell people to stop sharing photos of themselves online, but it doesn’t hurt to be skeptical and think carefully about how your participation in these things – DNA testing kits, quizzes on social media, trends like the 10-year challenge – could be used against you. 

Privacy is not dead!

If anything, privacy is more important now than ever, as tech users are realizing that the more info they give out, the more they may be compromising their identity – their whole life. Privacy requires people to be educated and empowered about the limits and failings of technology, and to act accordingly. 

Alexa, what are you doing with my data?

Well, that didn’t take long. The Amazon Echo has been on the market a few short years and already unnerving stories of the smart speaker’s failings are cropping up worldwide, including in Germany, where an Amazon customer took advantage of the new EU General Data Protection Regulation (GDPR) that grants individuals access to their personal data.