Top News

Five ways your organization can reduce burnout across your IT team

Beauceron Security’s mission is to empower people.

When we do that well, people help their organizations proactively reduce their cyber risk while also improving their ability to respond to and recover from cyber incidents.

Part of that mission involves helping people manage ever-increasing workloads and corresponding stress.

Competing priorities, constant change and financial constraints can create stress in the workplace. When left unaddressed, burnout — long-term, unresolved job stress — can take over, and that’s bad news for your people and your bottom line.

Information technology (IT) professionals are no strangers to workplace stress. Small teams of experts are facing increased security risks. In that context, it’s easy to see why so many organizations around the world struggle to build and maintain traditional security awareness programs: they simply take too much time in an already too-busy workday.

That’s why we’ve designed a platform that leverages the best aspects of technology to do what computers do best — automate routine tasks and distill data into meaningful metrics — while letting people focus on what they do best: connecting with other people.

When an organization becomes human-centric, it focuses on connecting and empowering its people and becomes more proactive, reducing the number of incidents and reactive issues teams have to deal with.

That results in less stress for leaders and employees.

Here are some of our tips on how to move to a human-centric approach.

1. Recognize it’s an issue

You can’t solve a problem until you acknowledge it. A 2019 study that delved into Chief Information Security Officer (CISO) stress levels found that, across 408 CISOs in the United Kingdom and United States, 91 per cent reported suffering from moderate or high levels of stress. In Canada, the inability to unplug after work hours is reaching pandemic proportions.

Putting in place the right plan, maximizing the effectiveness of your human and technology resources and prioritizing risk areas are all ways to manage security stress.

When we designed the Beauceron platform, we looked for ways to help security leaders do all of those things through our powerful dashboards and metrics, and by engaging everyone in an organization to play a greater role in security.

2. Educate and empower your entire team

An educated team throughout your organization will stop security threats before they escalate to your IT department. Beauceron’s library of multilingual courses teaches employees about the important role they play in protecting their organizations.

Employees learn how to identify and report potential attacks, such as phishing e-mails. They also learn steps they can take to protect themselves including account hygiene practices such as using multi-factor authentication and password managers.

If a would-be threat never has the chance to materialize, the potential stressors on already overworked IT professionals can be minimized.  

3. Determine where your risks are

Many CISOs struggle to keep up with ever-changing risks. This can make it tough to pinpoint and address problems.

Beauceron identifies the risky people in your organization and helps them overcome weak points in knowledge and training to better the company’s overall risk score. Assessments are visual and easy to understand, helping high-risk employees change their behaviour quickly.

Beauceron's pioneering approach goes far beyond employee training.

Its unique scoring system and risk advisor feature help identify risks not just in people, but in culture, process and technology, providing the world’s most comprehensive human-centric approach to managing cyber risk.

4. Reward and recognize employees

The Beauceron platform comes with built-in rewards and a gamification system designed to get everyone engaged in managing their cyber risk. When education is gamified, people are more motivated to learn, their risk scores are lowered — and your stress is reduced!

Of course, technology can only do so much. When you’re not spending time on routine, repetitive tasks, you have time to think about additional proactive ways to help your team.

At Beauceron, we leverage our own technology and others that enable automation so that we can focus on additional ways to reward and recognize our team. That includes professional development opportunities and implementing improved benefits programs such as employee assistance programs (EAPs) that provide counselling and advice on legal, financial and mental health matters.

5. Promote flexibility and fun

Recognize that individuals within your company have distinct personalities and need different tools to succeed.

Some may do their best work remotely, while others need more face-time and collaboration with co-workers.

Some may feel recharged after playing with a furry friend. (Did we mention we’re supporting a “Canine Comfort Zone” run by St. John Ambulance? Therapy dogs will be on-site at the Atlantic Security Conference in Halifax this month!) Show your employees that their uniqueness is valued, and they’ll work harder for you.

Stress is contagious.

If employees have their needs met, they’ll be more productive and won’t be passing stress along to the higher-ups whose jobs are demanding enough as is.   

Let Beauceron help you educate and empower your team — and reduce stress and burnout!  

Visit our booth at Atlantic Security Conference on April 24 and 25 or reach out to our team to learn more: info@beauceronsecurity.com or 1-877-516-9245. 

 

Seven reasons to start using a password manager today

1) You aren’t alone

If you’re not sure what a password manager is, you’re not alone. And if you’re familiar with password managers but haven’t gotten around to using one, unfortunately you’re in the majority there, too.  

Good news — The Pack Has Your Back. Here’s the rundown! 

2) It’s easier than you think

Think of it as a diary where you’ve written all your secrets. But unlike any diary you kept as a kid, this one has a nearly impenetrable lock, and only you hold the key. In this case, the key is a strong, secure “master password.”  

Most people have weak passwords and use the same passwords on multiple sites and services. (And no, using the same password with a “1” after it does NOT count as a new password!) A password manager does the dirty work for you by generating random, strong passwords for all your logins, and storing them in one place that’s easy for you to access.
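Under the hood, this is simpler than it sounds. Here’s a minimal Python sketch — not any particular product’s implementation — of what a password manager does when it generates a strong random password for you:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and symbols,
    drawing from a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Every call produces a fresh, unpredictable password
print(generate_password())
```

The key detail is Python’s `secrets` module, which uses a cryptographically secure random source rather than the predictable `random` module — the same principle a real password manager relies on.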

3) Less stuff to remember

With a password manager, you only have to remember that one master password. Period. Without a password manager, you have to remember dozens for all of your online accounts and services: phone and internet services, social media pages, banking sites, work and personal email accounts — everything these days requires a password!   

4) We’ve narrowed down the choices

LastPass is widely trusted and offers its best features — like a secure and searchable password “vault” where you can store all passwords, access on all devices, multi-factor authentication, and secure “notes” for files and information beyond just your passwords — for free.  

Other good options include 1Password, Dashlane or Keeper.

Some are free, some come with a small fee. Do your research and see which one best suits your needs. 

5) It’s safer than what you’re doing now

The obvious question people have about password managers is: what if that one master password gets hacked? Then the hacker would have access to all my online services and life as I know it would come to an end!   

Of course, no security measure online or in real life is infallible, but your “last password ever” is highly secure. It’s long and complex, with letters, numbers and other characters, making it almost impossible to crack.

It’s a lot safer than writing them down on a piece of paper or logging them away in a Google Doc, right? A password manager offers the best combination of security and convenience.

6) Who doesn’t like a good story?

What if you forget your master password? Here’s how to beat that: make your password into a story — a memorable phrase or a catchy song lyric.

Many people don’t realize that a longer password is tougher to crack than a short, random-looking one. So, for example (don’t use this one!), the password “afd%#T”, though complex and involving symbols as well as upper- and lower-case characters, would be easier to crack than something that tells a story, like “mydog8theblackcat@midnighT.” There are recognizable words in the second one, but it’s longer and therefore harder to crack.
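To see why length wins, you can compare brute-force search spaces: the worst-case number of guesses an attacker needs is the alphabet size raised to the power of the password’s length. A rough Python sketch (a simplification that ignores dictionary attacks, which is why your phrase should still be personal and unpredictable):

```python
import math

def search_space(length: int, alphabet_size: int) -> int:
    """Worst-case number of brute-force guesses: alphabet_size ** length."""
    return alphabet_size ** length

# "afd%#T": 6 characters drawn from roughly 95 printable ASCII symbols
short_complex = search_space(6, 95)

# "mydog8theblackcat@midnighT": 26 characters; even if an attacker somehow
# knew it used only letters, digits and '@' (about 63 symbols), the
# search space is astronomically larger
long_passphrase = search_space(26, 63)

print(f"6-char complex password: ~10^{math.log10(short_complex):.0f} guesses")
print(f"26-char passphrase:      ~10^{math.log10(long_passphrase):.0f} guesses")
```

Every extra character multiplies the search space by the whole alphabet size, which is why length beats complexity.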

Make it personal to you.  

7) It’s free and quick

Go to LastPass.com (if that’s the one you choose), click the “Get LastPass Free” button, and enter your email, the master password, and an optional reminder. That’s the basic version. You can add services such as a GB of encrypted file storage and priority tech support if you pay a minor monthly fee.   

Then you just install the extension in your browser — it'll walk you through it, don’t worry — in order to capture and store passwords into its vault as you go about life online.   

It takes seconds. Okay, maybe a minute. But that’s really it!
   

If you want to learn more about how you can reduce your cyber risk at home and at work, contact Beauceron Security to learn more! info@beauceronsecurity.com 

In wake of scandal and tragedy, Facebook privacy crackdown needed

It’s been a year – long enough to have forgotten the details of that Cambridge Analytica story that was all over the news last March.  

A refresher: In early 2018, Canadian-born Christopher Wylie went public with allegations that the British consulting firm Cambridge Analytica harvested private information from more than 50 million Facebook users, and shaped that data into social media strategies to support Donald Trump’s 2016 presidential campaign. The scandal was among the biggest privacy controversies involving Facebook, but it certainly hasn’t been the last.

A+ for promises, D- for action

Though we have seen some efforts from Facebook to promote transparency – such as a new app to be rolled out in June that will show who paid for political ads and whom they’re targeting – Facebook is well known for making big promises about user privacy and keeping none of them. Remember when they promised a “delete your history” button in May 2018, after the backlash from Cambridge Analytica? It’s still nowhere to be seen. And that lack of follow-through is oh-so typical of Facebook. 

A wasted year

In the last year, legislators in the States have at least started to have serious conversations about what a national privacy law might look like. The American focus is on trying to rein in the power of big tech. But fast-forward 12 months and Canadian politicians have failed to create anything resembling a national data strategy. Probably because they’re more focused on winning the upcoming election than on protecting citizens’ privacy.  

What politicians should do is take Europe’s General Data Protection Regulation and Canadianize it, effectively cracking down on rule-breakers like Facebook with major fines that would have a real impact on their practices.  

Tragedy broadcast on social media

A horrific tragedy unfolded in New Zealand last week, where a terrorist attacked a mosque in Christchurch. Because Facebook is still basically a free-for-all of information dissemination, video of the deadly shooting was live-streamed and then shared millions of times – almost instantly – on social media.

Once digital data is created and replicated, it’s nearly impossible to control; people have created more data in the last couple of years than in all human history, and criminals are swimming in a sea of personal information that can be easily exploited.  

Who’s accountable?

New Zealand internet service providers actually blocked areas of the internet that continued to host these reprehensible materials. This was one of the most aggressive actions taken by ISPs worldwide, and it raises some thought-provoking questions regarding who should be accountable for data that’s shared online: the platform, or the internet service providers, or solely the individuals sharing it? Is there such a thing as regulated free speech? 

And while we’re on the topic: Is it really necessary for every human being to have the capability to instantly broadcast anything with zero vetting? Facebook should restrict this live-streaming capability to verified news media and individuals, so this kind of thing can’t happen in the future. 

An encouraging reaction

It was heartening to see the numbers of people across the world who refused to watch or share these violent images, in a sort of moral protest. If we really want change, though, we should be pushing our legislators to create laws that crack down on big firms that handle and distribute data. 

Tracking your health with an app? Facebook is too

You don’t even have to be a Facebook user for the social media platform to collect data on you – and highly personal data, at that! 

If you’re using a phone app that tracks things like your menstrual cycle, heart rate, exercise habits and calories burned, chances are good that that app is sending that information along to – you guessed it – Facebook.  

Fuel for advertising

A Facebook-provided analytics tool called “App Events” lets app developers track and store user data, then send it right to Facebook, who then use it to fuel their advertising algorithms. Developers use App Events to track how and when people use their apps, and to gain insights for their own advertising purposes.

The social media platform was caught acquiring sensitive data from Flo Period & Ovulation Tracker and around 30 other apps so that the information could be used for hyper-targeted ads. People were willingly inputting this info into their apps, but they had no idea what would happen to the data beyond the primary function of the app.

An example: Say a woman is trying to get pregnant, so she’s tracking her periods, ovulation and sexual activity in the Flo Period app. The app sends that information to Facebook, who then hit her with ads for maternity clothing, prenatal vitamins, diapers and daycares in her area.  

The goal of most tech is to slurp up information and turn it into profit, no matter how private the data. And it doesn’t get much more private than bodily functions! 

Feigning ignorance

Facebook claims it requires apps to tell users what info is shared and forbids apps from sending intimate data. But it did nothing to stop the flow of that sensitive data.  

Given their lax attitude toward data privacy, it’s not hard to imagine Facebook selling private information to health insurers, who would pay a premium for it and even use it to decide who they’ll cover. Free health apps have already been known to give up sensitive information to insurance companies – why wouldn’t Facebook do it?  

Digital gangsters

A Wall Street Journal investigation found that many of these apps didn’t disclose that they would be sharing this information with third parties, or with Facebook specifically. Shortly after the Journal story broke, New York Governor Andrew Cuomo called for further investigation into this invasion of privacy.

This all comes on the heels of a scathing report out of the U.K. that essentially called Facebook digital gangsters who are abusing the power of their platform. And it’s not just Facebook; Google and Amazon have a scary amount of data on every one of us, which means we need to be taking this seriously.  

Data privacy should be an election issue

While the issue of data privacy is finally starting to be a high priority in the States, with investigations into breaches and tougher policies mirroring those of Europe, in Canada we’re just not there yet. We need to push for stricter privacy legislation and make it an election issue. We need to demand accountability from these data-hoarding corporations.

Cybercriminals: Living large on the lam

Cyberattacks may seem like an ambiguous threat – happening to someone else, somewhere else. But serious cybercrime is hitting close to home, with attacks from North Korea now targeting Canadian retail banking customers.  

Security expert Christopher Porter highlighted this threat at a House of Commons meeting earlier this month. He noted that top Canadian financial institutions were exposed to state-sponsored cybertheft from North Korea just one year ago, in February 2017. 

What they want

The attack redirected people to malicious downloads that would subsequently take control of their computers, accessing their bank accounts. These criminals are funding the North Korean nuclear program through stolen money, by targeting financial institutions, companies and retail customers. These cyberattacks show a level of sophistication that was once only seen among nation states’ intelligence groups like the NSA, according to Porter. 

How they’re getting it

"Man-in-the-middle" attacks involve an attacker covertly relaying or changing the communication between two parties who believe they’re communicating directly.  

In this case the “man in the middle” hacks into your device, imitates your banking sign-on page, and lures you to enter your private information. When you’re done banking, the hacker logs on with your credentials and steals your money.  
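One of the main technical defences against this kind of impersonation is TLS certificate validation: before your browser or banking app sends credentials, the server must prove its identity with a trusted certificate – proof a man-in-the-middle posing as your bank can’t provide. As a minimal illustration (not any bank’s actual code), Python’s standard library turns this checking on by default:

```python
import ssl

# A default TLS client context refuses to talk to servers that can't
# prove their identity with a valid certificate for the right hostname.
context = ssl.create_default_context()

# The hostname must match the certificate...
print(context.check_hostname)
# ...and a valid, trusted certificate is mandatory.
print(context.verify_mode == ssl.CERT_REQUIRED)
```

This is also why you should never click through a browser certificate warning, especially on a banking site: the warning often means exactly this check has failed.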

Why they’re successful

The perpetrators of cybercrime are the same groups known for organized crime: weapons and human trafficking, drugs and more. Cyber represents a booming growth industry for them.

Cyberattacks are relatively easy to accomplish and extremely tough to police. In Canada, despite their best efforts, police can only identify a suspect in 7% of cases. Criminals are going where the police are not, so their odds of getting away with these crimes are much higher than with traditional ones.

In addition to the anonymity cyber provides criminals, our telecom infrastructure was designed decades ago with little consideration for cybercrime, and it hasn’t adapted as quickly as criminals have. Staying ahead of our outdated safety measures, criminals are even bypassing newer security methods like multi-factor authentication.

Tom Cruise and the A.I. myth

One way of staying ahead of criminals is to stop them before they have the chance to commit a crime.  

In the 2002 sci-fi film Minority Report, police were able to predict and arrest criminals before they offended. That movie feels less like science fiction today, considering real police units in the U.K. are now using algorithms to direct officers to patrol specific high-crime areas. Unfortunately, these areas are disproportionately over-policed as it is.

In Canada, we’re also experimenting with artificial intelligence (A.I.) to accelerate bureaucratic processes. One well-intended effort is the use of A.I. with immigration applications. However, this attempt to use technology to serve immigrants more effectively has raised concerns about built-in algorithmic biases and inevitable abuses by authorities.

We may be introducing more problems than we’re solving by using algorithms and A.I. to tackle complex social problems. One of the biggest myths about A.I. is that a computer removes subjectivity and therefore can’t be biased. But the data fed into these computers is inherently flawed, because the people who create it are flawed.

How can we respond?

Protecting ourselves from cyberattacks starts with awareness. The more people become knowledgeable about their cyber risk and the simple steps they can take to reduce it, the more time our IT and security professionals will be able to dedicate to putting out the big fires.

Amazon: now in the business of tracking babies

The demand for smart products for the home is growing, and it was only a matter of time before the purveyors of smart tech turned their attention to a booming market: babies. Enter Hatch Baby, a smart nursery company launched by Amazon’s Alexa Fund.

The company was up and running in 2014, made its way to Shark Tank by 2016, and its offerings are now among the top 100 baby products (of more than 200,000) on the Amazon marketplace. Hatch Baby sells a smart changing pad that can track your baby’s weight; for older kids, there’s a smart nightlight/sound machine. These devices are connected to an app that lets parents control them and track their kids’ interaction with them.

Amazon and Google are known for collecting and storing way too much data on their customers, and now that’s starting literally from birth.  

If the product testimonials are to be believed, these kid-tracking gadgets are not only life-changing, but necessary. Amazon promises “peace of mind.” Make no mistake: companies such as Google and Amazon are not in the business of helping parents raise their children. They’re in the business of securing market share, killing the competition, and dominating all our time and money.  

Surveillance and censorship

If you’ve seen Black Mirror, you probably recall the “Arkangel” episode in which a woman opts to have a chip implanted into her daughter that allows the mom to track all her movements, to see everything in her daughter’s line of sight, and to pixelate all images that could be disturbing to her child. While the chip technology is at first useful for ensuring the daughter’s safety, as she gets older, the daughter rebels against the constant tracking and surveillance. The mom is addicted to spying on her daughter, and the daughter despises her for it.

It’s easy for us to predict the disastrous implications when we’re watching this fictionalized narrative, so why can’t we foresee the ill effects of real-life tracking tech such as Hatch Baby? 

Resilience versus convenience

As our lives become more convenient and efficient, we become less resilient. With Amazon and Google devouring every aspect of our lives and selling us almost everything we buy, the small- and medium-sized businesses that are the backbone of the Canadian economy suffer. We’re setting ourselves up for economic failure.  

Amazon has been caught, according to a Bloomberg report, strong-arming other home smart-tech companies into letting their devices communicate with Alexa. Alexa collects data from smart light-switches about when a light has been turned on or off, so Amazon knows when the customer is home; smart TVs report what channels customers watch; smart locks let Amazon know whether the front door is bolted.  

They see you when you’re sleeping

This means Google and Amazon know when you’re asleep, when you’re awake, when you’re home, what shows you’re watching and when, the current temperature in your living room, when you’re eating, what you’re buying – everything. They demand this data without our informed consent, then appease us with the lie that it’s all for our convenience.  

Who is it all for?

Where Hatch Baby is concerned, parents need to put themselves in their children’s shoes and ask whether their kids’ lives being tracked is really to their benefit. We need to think about whether we need it – we've gotten by without this kind of “smart” tech till now, and we can continue to do so.  

Mass transit system or tool for mass surveillance?

Here’s one for our Upper Canadian readers: Metrolinx, the Crown agency that manages public transit in Toronto and surrounding areas, has made the news again for sharing passenger data stored on Presto fare cards with law enforcement – without asking for customer consent or insisting on warrants from police.   

In 2018, there were 22 cases related to criminal investigations or suspected offences where the agency revealed card users’ information without a court order. 

Accountability is everything

This raises the question: Could Presto become a surveillance tool?  

The ease with which card data is being disclosed should be concerning. We’re a country based on the rule of law. Unless it’s a life-and-death situation requiring police and Metrolinx to act quickly, we need to prevent this type of immediate access to data. Even in an emergency, Metrolinx and police should have to thoroughly explain why the normal process of acquiring data was subverted.

In a criminal investigation, police hate doing the paperwork involved – who wouldn’t? Especially when they have a proven track record of simply asking the agency to hand over the information it holds. But due process is a crucial aspect of protecting the privacy rights of citizens.

Information is power

Systems like Presto – where information is accumulated online in mass quantities and stored – can be hacked. And travel information could be very valuable for a hacker who wants to blackmail and extort their victim(s).  

Imagine a man who’s having an affair tells his wife he’s in one place, but his Presto card information proves otherwise. Or an employee calls in sick to work when they were really at a job interview, and their transit data shows precisely where they went. The scariest example of this is stalking – when people flee bad relationships, the last thing they need is another layer of surveillance to combat, when their phones, cars and other tech may already be tracking them.

As with many tech advancements, the promised convenience seems to outweigh the risk at first: with Presto cards, passengers get perks such as avoiding lineups by being able to add funds to cards online; they can simply tap a card rather than fumble in their wallet for tokens before their morning commute. The only perk they’re giving up is arguably the best one of all: anonymity! 

Proper people, processes, and technology

Metrolinx, just like many businesses and organizations offering speed and convenience, probably isn’t as mature as it needs to be when it comes to handling people’s private information. It’s simply doing the best it can with the limited resources it has.

It’s hardly just the TTC that’s falling short – cases have been thrown out of court when due process wasn’t followed, or when warrants weren’t obtained for critical evidence.

Unless the proper people, processes and technology are in place, there’s no way to keep up with the complex issue of privacy rights. 

How much is your sensitive info worth to Facebook? About $20

Facebook has been targeting teenagers and young adults aged 13 to 35 with its VPN app “Research,” part of its broader “Project Atlas,” a far-reaching effort to gain insight into everyday lives and to detect potential emerging Facebook competitors.

If users install the app on their phone and agree to the extra-complicated terms of service, they get $20 in gift cards, plus additional $20 payments for referring friends. Meanwhile, Facebook gets almost every single piece of sensitive data transmitted through their phones – including private messages, photos, web browsing activity and more. Facebook’s level of access to personal data and activity would make intelligence agencies such as the U.S. National Security Agency envious.

The imbalance of power here is astounding. But to cash-strapped teens who don’t understand just how much they’re giving away (and let’s face it – no one could understand the legalese in these intentionally long, complex user agreements) – it seems like easy money.

A rebrand of a banned app

The app lets Facebook suck in all the users’ phone and web activity, much like another app called Onavo that Apple banned last June. Research is basically a rebranded version of Onavo, meaning Facebook is still flagrantly flouting the rules and knowingly undermining their relationship with Apple.

Why is Facebook doing this? Simple: so they can figure out which competitors to kill, which to buy, and what new features to develop next. It’s extremely profitable for Facebook to glean info such as Amazon purchase history – which they actually did ask users to screencap for them – and create an accurate portrait of purchasing habits and other user trends, so they can foresee what their next steps should be in the big picture. 

They knew to buy WhatsApp, for example, because through Onavo’s tracking they discovered that there were twice as many conversations in that age group happening on WhatsApp compared with Facebook Messenger. Not only did they know to buy it, they also had an advantage in knowing how much it was truly worth and what they should pay for it.

Tricky tracking

Facebook is going about all this with a disturbing level of surveillance that’s normally reserved for corporate security or government agencies.  

The Research app initially gives no clue that it’s connected to Facebook; that’s also intentionally misleading, because Facebook is well aware that teenagers are leaving their platform in droves, so if they can convince teens to download a seemingly unrelated app, they still get all that valuable data. 

They also used tools Apple provided for app-testing purposes, not for mass surveillance, violating not just users’ trust, but also the trust of their technology partners and providers.

There’s no way to give truly informed consent

Facebook always positions themselves as harmless or, at worst, incompetent, but after the last two years of their repeated abuses we know that’s simply not the case. They’re saying, “You’ve got nothing to hide, so download this app, help us improve our service, and get paid for it.” But you’re giving up your privacy for an insultingly low compensation.

And if Facebook’s internal security practices are as bad as its privacy practices, there’s a risk your highly personal information could fall into the wrong hands.

Facebook will stop at nothing to leverage their monopoly to secure their market position.  

What can you do? Don’t give in! Get Facebook and affiliated apps off your phone, petition for privacy to be upheld in all levels of government, and push for lawmakers to finally hold Facebook accountable. 

Apple loses face with FaceTime bug

Apple may value user privacy more than the other tech giants, but even they aren’t immune to issues that compromise that privacy.  

In late January, a FaceTime group chat error let users hear audio from the person at the other end before they’d picked up. In some cases, the device also broadcast video. The audio and video functions were enabled early, in other words, making for an unintentional – but still very embarrassing – mistake on Apple’s part! 

A bug in the system

Your cool fact for the day: the term “bug” was popularized in the early days of computing, when real insects would crawl into mid-20th-century machines and jam their electromechanical relays – most famously, a moth found in Harvard’s Mark II computer in 1947.

We now use the expression “bug” to refer to any unintentional software error.  

This FaceTime mistake was introduced in a software update, and only discovered recently.

Working out a fix

Intentions mean a lot – we know, at least, that this malfunction wasn’t perpetrated by a nation state or criminal group; it’s a bug, not a deliberate hack. 

On Monday, Apple said it was working on a software patch to solve the problem. They’d disabled the group chat functionality – meaning users could still chat one-on-one and their FaceTime app would still work – and Apple promised to push out an update to Mac and iOS devices to fix the flaw. On Friday, they apologized for the error. 

Do you really need to cover your webcam?

A good way to nip this kind of privacy issue in the bud is to cover the camera on your laptop, tablet and phone, either with a quick solution like electrical tape, or with an adhesive or attachable device specifically made to cover webcams. These cheap, quick options could save you a lot of hassle in the long run and give you some peace of mind.

Of course, this type of glitch is not specific to FaceTime. There are plenty of good reasons to cover that cam: other pieces of malware and hacks have surfaced that are able to turn cameras on – affecting Macs and PCs – without activating the camera lights to tip you off that they’re functioning.   

Another thing you can do is go into your phone and turn off FaceTime for now until the proper security update is pushed out. 

As always, for the sake of your own privacy, remember that no tech is immune to human error!

Don't take the '10-year challenge' at face value

By now everyone has seen the “10-year challenge” meme: you share a photo of yourself from a decade ago alongside another that’s recent. It’s a way to show friends how well – or how poorly – you've aged, and to share and comment on photos of others on social media. Seems like harmless fun, right? 

Maybe, but maybe not.  

The perfect data set

No one is sure where the “challenge” originated, and questions are arising about whether it’s a data mine for facial recognition software. It’s easy to see how that’s possible, because the meme incorporates the perfect data set: millions of people self-attesting that this photo is them 10 years ago, and that one is them now, attached to the same identity.  

Your face is increasingly becoming a key part of your online identity. Giving it out without securing it could come back to haunt you.  

The old notion of a photo – a moment in time, captured and shared with family and close friends in an innocuous setting – is long gone. Photos can be weaponized and used to attack your online identity, to defraud you, even to break into your devices. 

Those pics are part of your biometric data

Biometric data include your face, your thumbprint and retinal scans – and in China, software has been developed that can identify people solely by the way they walk! “Gait recognition” surveillance may (hopefully) never be part of life in the Western world, but other less obvious ways of tracking people are on the rise, such as DNA kits sold by various companies, some of which disclose in their terms of service that by participating, you grant the testing company a royalty-free, perpetual licence to your DNA.

These DNA kits could reveal that you have a genetic disease, and if that info were ever sold to insurance companies, that could adversely impact you and your family.  

How private do we need to become?

Photo sharing is huge and it’s getting people in major trouble, from the “sextortion” of Tony Clement, to “deepfakes” that create a realistic depiction of someone from the massive volume of available photos, applying their image to videos that look scarily legitimate.  

The more images of you that are out there, the more data there is to work with, and the easier it is for your image to be weaponized against you.

It’s probably not realistic to tell people to stop sharing photos of themselves online, but it doesn’t hurt to be skeptical and think carefully about how your participation in these things – DNA testing kits, quizzes on social media, trends like the 10-year challenge – could be used against you. 

Privacy is not dead!

If anything, privacy is more important now than ever, as tech users are realizing that the more info they give out, the more they may be compromising their identity – their whole life. Privacy requires people to be educated and empowered about the limits and failings of technology, and to act accordingly.