
“We [Don't] Care About Your Privacy”

Filtered photo of a metal container left on the street, with the sentence "We've updated our privacy policy." painted on it and three faded happy face icons around it. On and around the container are icons of hidden red flags.

They all claim "Your privacy is important to us." How can we know if that's true? With privacy washing being normalized by big tech and startups alike, it becomes increasingly difficult to evaluate whom we can trust with our personal data. Fortunately, there are red (and green) flags we can look for to help us.

If you haven't heard this term before, privacy washing is the practice of misleadingly or fraudulently presenting a product, service, or organization as trustworthy for data privacy, when in fact it isn't.

Privacy washing isn't a new trend, but it has become more prominent in recent years as a strategy to gain the trust of progressively more suspicious prospective customers. Unless politicians and regulators get much more serious and severe about protecting our privacy rights, this trend is likely to only get worse.

In this article, we will examine common indicators of privacy washing, and the "red" and "green" flags we should look for to make better-informed decisions and avoid deception.

Spotting the red flags

Marketing claims can be separated from facts by an abysmally large pit of lies

It's important to keep in mind that it's not the most visible product that's necessarily the best. More visibility only means more marketing. Marketing claims can be separated from facts by an abysmally large pit of lies.

Being able to distinguish between facts and marketing lies is an important skill to develop, doubly so on the internet. After all, it's difficult to find a single surface of the internet that isn't covered with ads, whether in plain sight or lurking in the shadows, disguised as innocent comments and enthusiastic reviews.

So what can we do about it?

There are some signs that should be considered when evaluating a product to determine its trustworthiness. It's unfair this burden falls on us, but sadly, until we get better regulations and institutions to protect us, we will have to protect ourselves.

It's also important to remember that evaluating trustworthiness isn't binary, and isn't permanent. There is always at least some risk, no matter how low, and trust should always be revoked when new information justifies it.

Examine flags collectively, and in context

It's important to note that each red flag isn't necessarily a sign of untrustworthiness on its own (and the same is true for green flags, in reverse). But the more red flags you spot, the more suspicious you should get.

Taken together, these warning signs can help us estimate when it's probably reasonably safe to trust (low risk), when we should revoke our trust, and when we should refrain from trusting a product or organization entirely (high risk).

🚩 Conflict of interest

Conflict of interest is one of the biggest red flags to look for. It comes in many shapes: sponsorships, affiliate links, parent companies, donations, employment, personal relationships, and so on and so forth.

Online influencers and educators regularly receive offers to "monetize their audience with ease" if they agree to overtly or subtly advertise products within their content. If this isn't explicitly presented as advertising, then there is obviously a strong conflict of interest. The same is true for affiliate links, where creators receive a sum of money each time a visitor clicks on a link or purchases a product through it.

It's understandable that content creators are seeking sources of revenue to continue doing their work. This isn't an easy job. But a trustworthy content creator should always disclose any potential conflicts of interest related to their content, and present paid advertising explicitly as paid advertising.

What to do?

Before trusting content online, try to examine the sources of revenue behind it. Look for affiliate links and sponsorships, and try to evaluate whether what you find might have influenced the impartiality of the content.

Parent companies

This one is harder to examine, but is extremely important. In today's corporate landscape, it's not uncommon to find conglomerates of corporations with a trail of ownership so long it's sometimes impossible to find the head. Nevertheless, investigating which company owns which is fundamental to detecting conflicts of interest.

For example, the corporation Kape Technologies owns both VPN providers (ExpressVPN, CyberGhost, Private Internet Access, and Zenmate) and websites publishing VPN reviews. Suspiciously, its own VPN providers always rank at the top on its own review websites. Even if there were no explicit directive for the websites to do this, which review publisher would dare rank a product negatively when that product is owned by its parent company, the one keeping it alive? This is a direct and obvious conflict of interest.

What to do?

Look at the Terms of Service and Privacy Policy (or Privacy Notice) for declarations related to a parent company. This is often stated there. You can also examine an organization's About page, Wikipedia page, or even the official government corporate registries to find out if anyone else owns an organization.

Donations, event sponsorships, and other revenues

When money is involved, there is always a potential for conflict of interest. If an organization receives a substantial donation, grant, or loan from another, it will be difficult to remain impartial about it. Few would dare to talk negatively about a large donor.

This isn't necessarily a red flag in every situation of course. For example, a receiving organization could be in a position where the donor's values are aligned, or where impartiality isn't required. Nevertheless, it's something important to consider.

In 2016, developer and activist Aral Balkan wrote about how he refused an invitation to speak at a panel on Surveillance Capitalism at the Computers, Privacy, & Data Protection Conference (CPDP). The conference had accepted sponsorship from an organization completely antithetical to its stated values: Palantir.

Balkan wrote: "The sponsorship of privacy and human rights conferences by corporations that erode our privacy and human rights is a clear conflict of interests that we must challenge."

How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?

This is a great example of how sponsors can severely compromise not only the impartiality of an organization, but also its credibility and its values. How could the talks being put forward at such a conference be selected without bias? How could one claim to defend privacy rights while receiving money from organizations thriving on destroying them?

It's worth noting that this year's CPDP 2025 sponsors included Google, Microsoft, TikTok, and Uber.

What to do?

Examine who sponsors events and who donates to organizations. Try to evaluate whether an organization or event received money from sources that contradict its values. Does this compromise its credibility? If a sponsor or donor has conflicting values, what does the sponsor gain by supporting this event or organization?

Employment and relationships

Finally, another important type of conflict of interest to keep in mind is the relationship between the individuals producing content and the companies or products they are reporting on.

For example, if a content creator works or previously worked for an organization, and the content requires impartiality, this is a potential conflict of interest that should be openly disclosed.

The same can be true if this person is in a professional or personal relationship with people involved with the product. This can be difficult to detect of course, and is not categorically a sign of bias, but it's worth paying attention to in our evaluations.

What to do?

Look for disclaimers related to conflict of interest. Research the history of an organization to gain a better understanding of the people involved. Wikipedia can be a valuable resource for this.

🚩 Checkbox compliance and copy-paste policies

Regrettably, many organizations have no intention whatsoever of genuinely implementing privacy-respectful practices, and are simply trying to get rid of these "pesky privacy regulation requirements" as cheaply and quickly as possible.

They treat privacy law compliance like a tedious list of annoying tasks. They think they can complete this list by doing the bare cosmetic minimum, so that everything looks compliant (of course, it is not).

A good clue that this mindset is at work in an organization is a very generic privacy policy and terms of service, often simply copy-pasted from another website or AI-generated (which is much the same thing).

Not only is this extremely unlikely to truly fulfill the requirements for privacy compliance, but it also almost certainly infringes copyright law.

What to do?

If you find few details in a privacy policy that are specific to the organization, try copying one of its paragraphs or a long sentence into a search engine (with quotation marks around it to find exact matches). This will help you detect whether other websites use the same policy.
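To make this concrete, here is a minimal sketch of the check in Python. The policy excerpt and the DuckDuckGo query URL are illustrative assumptions; any search engine that supports quoted exact-match queries will do.

```python
# Minimal sketch: build an exact-match search query from a policy excerpt.
# The excerpt below is hypothetical, and DuckDuckGo is just one option.
from urllib.parse import quote_plus
import webbrowser

excerpt = "We value your privacy and are committed to protecting your personal data"

# Quotation marks around the phrase ask the engine for exact matches only
query_url = "https://duckduckgo.com/?q=" + quote_plus(f'"{excerpt}"')

print(query_url)
webbrowser.open(query_url)  # opens the results in your default browser
```

If the results page shows the same paragraph on several unrelated websites, you've likely found a copy-pasted policy.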

Some might be using legitimate templates of course, but even legitimately usable policy templates need heavy customization to be compliant. Sadly, many simply copy-paste material from other organizations without permission, or use generative AI tools that do the same.

If the whole policy is copied without customization, it's very unlikely to describe anything true.

🚩 Meaningless privacy compliance badges

Many businesses and startups have started to proudly display privacy law "compliance badges" on their websites, to reassure potential clients and customers.

While it can indeed be reassuring at first glance to see "GDPR Compliant!", "CCPA Privacy Approved", and other deceitful designs, there is no central authority verifying this systematically. At this time, anyone could decide to claim they are "GDPR Compliant" and adorn their website with a pretty badge.

Moreover, if this claim isn't true, it is of course fraudulent and likely to break many laws. But some businesses bet on the assumption that no one will verify or report it, or that data protection authorities simply have better things to do.

While most privacy regulations adopt principles similar to the European General Data Protection Regulation (GDPR) principle of accountability (where organizations are responsible for compliance and for demonstrating compliance), organizations' assertions are rarely challenged or audited. Because most of the time there isn't anyone verifying compliance unless there's an individual complaint, organizations have grown increasingly fearless with false claims of compliance.

What to do?

Never trust a claim of privacy compliance at face value, especially if it comes in the shape of a pretty website badge.

Examine organizations' privacy policies, contact them and ask questions, look for independent reviews, investigate to see if an organization has been reported before. Never trust a first-party source to tell you how great and compliant the first-party is.

🚩 Fake reviews

Fake reviews are a growing problem on the internet, and this was only aggravated by the arrival of generative AI. There are many review websites that are simply advertising in disguise. Some fake reviews are generated by AI, some are paid for or influenced by sponsorships and affiliate links, some are compromised by parent-company conflicts of interest, and many are biased in other ways. Trusting an online review today feels like trying to find the single strand of true grass in an enormous plastic haystack.

Genuine reviews are (were?) usually a good way to get a second opinion while shopping online and offline. Fake reviews pollute this verification mechanism by duping us into believing something comes from an independent third party, when it doesn't.

What to do?

Train yourself to spot fake reviews. There are many signs that can help with this, such as language that suspiciously uses the complete and correct product and feature brand names every time, reviewers who published an unnatural quantity of reviews in a short period of time, excessively positive reviews, negative reviews talking about how great this other brand is, etc. Make sure to look for potential conflicts of interest as well.

🚩 Fake AI-generated content

Sadly, the internet has been infected by a new plague in recent years: AI-generated content. This was mentioned before, but truly deserves its own red flag.

Besides AI-generated reviews, it's important to know there are also now multiple articles, social media posts, and even entire websites that are completely AI-generated, and doubly fake. This affliction makes it even harder for readers to find genuine sources of reliable information online. Learning to recognize this fake content is now an internet survival skill.

What to do?

If you find a blog where the same author publishes five articles every single day, be suspicious. Look at publication dates: if they are inhumanly close to each other, this can be a sign of AI-generated content.
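As an illustration, here is a minimal sketch of this cadence check in Python, assuming the blog exposes an RSS or Atom feed. It uses the third-party feedparser package, and the feed URL is a placeholder.

```python
# Minimal sketch: flag suspiciously fast publication cadence from an RSS feed.
# Requires the third-party feedparser package (pip install feedparser).
from datetime import datetime
from time import mktime

import feedparser

FEED_URL = "https://example.com/feed.xml"  # placeholder feed URL

feed = feedparser.parse(FEED_URL)
dates = sorted(
    datetime.fromtimestamp(mktime(entry.published_parsed))
    for entry in feed.entries
    if getattr(entry, "published_parsed", None)
)

# Count how many posts appeared less than an hour after the previous one
gaps = [later - earlier for earlier, later in zip(dates, dates[1:])]
rapid = sum(1 for gap in gaps if gap.total_seconds() < 3600)

print(f"{len(dates)} dated posts, {rapid} published within an hour of another")
```

Dozens of posts published minutes apart is not proof on its own, but combined with other signs it's a strong hint.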

When reading an article, note that AI-generated text often uses very generic sentences; you will rarely find the colorful writing style that is unique to an author. AI writing is generally bland, with no personality shining through. You might also notice the writing feels circular. It will seem like it's not really saying anything specific, except for that one thing that is repeated over and over.

🚩 Excessive self-references

When writing an article, review, or a product description, writers often use text links to add sources of information to support their statements, or to provide additional resources to readers.

When all the text links in an article point to the same source, you should grow suspicious. If all the seemingly external links only lead to material created by the original source, this can give the impression of independent supporting evidence, when in fact there isn't any.

Of course, organizations will sometimes refer back to their own material to share more of what they did with you (we certainly do!), but if an article or review only uses self-references, and these references also only use self-references, this could be a red flag.

What to do?

Even if you do not click on links, at least hover over them to see where they lead. Usually, trustworthy sources will have at least a few links pointing to external third-party websites. A diversity of supporting resources is important when conducting impartial research, and should be visible whenever relevant.
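For a rough, automated version of this check, here is a minimal sketch in Python that counts internal versus external links on a page. It assumes the third-party requests and beautifulsoup4 packages, and the article URL is a placeholder.

```python
# Minimal sketch: measure how self-referential an article's links are.
# Requires the third-party requests and beautifulsoup4 packages.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

ARTICLE_URL = "https://example.com/review"  # placeholder article URL

html = requests.get(ARTICLE_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
own_host = urlparse(ARTICLE_URL).netloc

internal = external = 0
for link in soup.find_all("a", href=True):
    # Resolve relative links against the article URL before comparing hosts
    host = urlparse(urljoin(ARTICLE_URL, link["href"])).netloc
    if host == own_host:
        internal += 1
    else:
        external += 1

print(f"{internal} self-references, {external} external links")
```

An article with dozens of links and zero external ones deserves extra scrutiny.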

🚩 Deceptive designs

Deceptive design can be difficult to spot. Sometimes it's obvious, like a cookie banner with a ridiculously small "reject all" button, or an opt-out option hidden under twenty layers of menu.

Most of the time, however, deceptive design is well-planned to psychologically manipulate us into picking the option most favorable to the company, at the expense of our privacy. The Office of the Privacy Commissioner of Canada has produced this informative web page to help us better recognize deceptive design.

What to do?

Favor tools and services that are built for privacy from the ground up and always default to privacy first. Train yourself to spot deceptive patterns, and be persistent in choosing the most privacy-protective option.

Don't be afraid to say no, to reject options and products, and to also report them when deceptive design becomes fraudulent or infringes privacy laws.

🚩 Buzzword language

Be suspicious of buzzword language, especially when it becomes excessive or lacks any supporting evidence. Remember that buzzwords aren't a promise, only marketing to get your attention. These words don't mean anything on their own.

Expressions like "military-grade encryption" are usually designed to inspire trust, but no such thing grants better privacy. Most military organizations likely use industry-standard encryption built on solid, tested cryptographic algorithms, like any trustworthy organization and privacy-preserving tool does.
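To show how unremarkable "military-grade" usually is in practice, here is a minimal sketch of AES-256-GCM, one of the industry-standard authenticated ciphers, using Python's third-party cryptography package. Any product using a standard library like this could truthfully make the same marketing claim.

```python
# Minimal sketch: "military-grade" AES-256 is ordinary, freely available crypto.
# Requires the third-party cryptography package (pip install cryptography).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # a 256-bit key: the usual "military grade"
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # 96-bit nonce, must never be reused with the same key
ciphertext = aesgcm.encrypt(nonce, b"hello, world", None)

assert aesgcm.decrypt(nonce, ciphertext, None) == b"hello, world"
print("AES-256-GCM round trip succeeded")
```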

Newer promises like "AI-powered" are completely empty, if not scary. Thankfully, many "AI-powered" apps aren't really AI-powered, and this is a good thing because "AI" is more often a danger to your privacy, and not an enhancement at all.

What to do?

Remain skeptical of expressions like "privacy-enhancing", "privacy-first approach", "fully-encrypted", or "fully compliant" when these claims aren't supported with evidence. "Fully encrypted" means nothing if the encryption algorithm is weak, or if the company has access to your encryption keys.

When you see claims of "military-grade encryption", ask which cryptographic algorithms are used and how encryption is implemented. Look for evidence and detailed information behind technological claims. Never accept vague promises as facts.

🚩 Unverifiable and unrealistic promises

Along the same lines, many businesses will be happy to promise you the moon. But then, they become reluctant to explain how they will get you the moon, how they will manage to give the moon to multiple customers at once, and what will happen to the planet once they've transported the moon away from its orbit to bring it back to you on Earth... Maybe getting the moon isn't such a good promise after all.

companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves

Similarly, companies promising you software that is 100% secure and 100% private are either lying or misinformed themselves.

No software product is 100% secure and/or 100% private. Promises like this are unrealistic, and (fortunately for those companies) often also unverifiable. But an unverifiable claim shouldn't default to a trustworthy claim, quite the opposite. Trust must be earned. If a product cannot demonstrate that its claims are true, then we must remain skeptical.

What to do?

As with buzzwords and compliance claims, never trust at face value. If there is no way for you to verify a claim, remain skeptical and aware that the promise could be empty.

Be especially suspicious of organizations repeating exaggerated guarantees such as "100% secure". Organizations that are knowledgeable about security and privacy will usually refrain from such binary statements, and tend to talk about risk reduction with nuanced terms like "more secure" or "more private".

🚩 Flawed or absent process for data deletion

Examining an organization's processes for data deletion can reveal a lot about its privacy practices and expertise. Organizations that are knowledgeable about privacy rights will usually be prepared to respond to data deletion requests, and will already have a process in place, one that doesn't require providing more information than they already have.

Be especially worried if:

  • You don't find any mentions of data deletion in their privacy policy.

  • From your account's settings or app, you cannot find any option to delete your account and data.

  • The account and data deletion process uses vague terms that make it unclear if your data will be truly deleted.

  • You cannot find an email address to contact a privacy officer in their privacy policy.

  • The email listed in their privacy policy isn't an address dedicated to privacy.

  • You emailed the address listed but didn't get any reply after two weeks.

  • Their deletion process requires filling out a form demanding more information than they already have on you, or uses a privacy-invasive third party like Google Forms.

  • They argue with you when you ask for legitimate deletion.

What to do?

If this isn't already explicitly explained in their policies (or if you do not trust their description), find the organization's privacy contact and email them before using their products or services, to ask about their data deletion practices.

Ask in advance which information will be required from you in order to delete your data. Also ask if they keep any data afterward, and (if they do) what data they keep. Once data is shared, it can be much harder to deal with. It's best to verify data deletion processes before trusting an organization with our data.

🚩 False reassurances

The goal of privacy washing is to reassure worried clients, consumers, users, patients, and investors into using the organization's products or services. But making us feel more secure doesn't always mean that we are.

Privacy theaters

You might have heard the term "security theater" already, but there's also "privacy theater". Many large tech organizations have mastered this art for decades now. In response to criticisms about their dubious privacy practices, companies like Facebook and Google love to add seemingly "privacy-preserving" options to their software's settings, to give people the impression it's possible to use their products while preserving their privacy. But alas, it is not.

Unfortunately, no matter how much you "harden" your Facebook or Google account for privacy, these corporations will keep tracking everything you do on and off their platforms. Yes, enabling these options might very slightly reduce exposure for some of your data (and you should enable them if you cannot leave these platforms). However, Facebook and Google will still collect enough data on you to make them billions in profits each year, otherwise they wouldn't implement these options at all.

Misleading protections

The same can be said for applications that have built a reputation on a supposedly privacy-first approach like Telegram and WhatsApp. In fact, the protections these apps offer are only partial, often poorly explained to users, and the apps still collect a large amount of data and/or metadata.

When deletion doesn't mean deletion

In other cases, false reassurance comes in the form of supposedly deleted data that isn't truly deleted. In 2019, Global News reported that Amazon's Alexa virtual assistant speakers didn't always delete voice recordings as promised. Google was also found guilty of this, even after receiving an order from the UK's Information Commissioner's Office.

This can also happen with cloud storage services that display an option to "delete" a file, when in fact the file is simply hidden from the interface, while remaining available in a bin directory or from version control.

How many unaware organizations might have inadvertently (or maliciously) kept deleted data by misusing their storage service and version control system? Of course, if a copy of the data is kept in backups or versioning system, then it's not fully deleted, and doesn't legally fulfill a data deletion requirement.
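Version control makes this concrete. Here is a minimal sketch in Python, assuming a local Git repository: every file ever deleted from the working tree can still be listed (and restored) from history until that history is rewritten.

```python
# Minimal sketch, assuming a local Git repository: files deleted from the
# working tree remain recoverable from history until the history is rewritten.
import subprocess

def deleted_but_recoverable(repo_path: str) -> set[str]:
    # --diff-filter=D keeps only commits that deleted files; --name-only
    # prints the deleted paths; --pretty=format: suppresses commit headers.
    result = subprocess.run(
        ["git", "-C", repo_path, "log", "--diff-filter=D",
         "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    )
    return {line for line in result.stdout.splitlines() if line.strip()}

print(deleted_but_recoverable("."))  # every "deleted" file still in history
```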

What to do?

Do not simply trust a "privacy" or "opt-out" option. Look at the overall practices of an organization to establish trust. Privacy features have no value at all if we cannot trust the organization that implemented them.

Investigate an organization's history of data breaches and how it responded to them. Was the organization repeatedly fined by data protection authorities? Do not hesitate to ask the organization's privacy officer questions about its practices. And look for independent reviews of the organization.

🚩 New and untested technologies

Many software startups brag about how revolutionary their NewTechnology™ is. Some even dare to brag about a "unique" and "game-changing" novel encryption algorithm. You should not feel excited by this; you should feel terrified.

For example, any startup serious about security and privacy will know that you should never be "rolling your own crypto".

Cryptography is a complex discipline, and developing a robust encryption algorithm takes a lot of time and transparent testing, usually with the help of an entire community of experts. Some beginners might think they've had the idea of the century, but until their algorithm has been rigorously tested by hundreds of experts, this is an unfounded claim.

The reason most software uses the same few cryptographic algorithms for encryption, and usually follows strict protocols to implement them, is that this isn't an easy task, and the slightest mistake could render the encryption completely useless. The same can be true for other types of technology as well.

Novel technologies might sound more exciting, but proven and tested technologies are usually much more reliable when it comes to privacy, and especially when it comes to encryption.

What to do?

If a company brags about its new technology, investigate what information it has made available about it. Look for a document called a white paper, which should describe in technical detail how the technology works.

If the code is open source, look at the project's page and see how many people have worked on it, who is involved, for how long, and so on.

More importantly, look for independent audits from trustworthy experts. Read the reports and verify if the organization's claims are supported by professionals in the field.

🚩 Criticism from experts

if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag

No matter how much an organization or product claims to be "privacy-first", if you find multiple reports of privacy experts raising the alarm about it, consider this a dark-red red flag.

If a company has been criticized by privacy commissioners, data protection authorities, privacy professionals, and consumer associations, especially if this has happened repeatedly, you should be very suspicious.

Sometimes, criticized corporations will use misleading language like "we are currently working with the commissioner". This isn't a good sign.

The marketing department will try to spin any authority audits into something that sounds favorable to the corporation, but this is only privacy washing. They would not be "working with" the privacy commissioner if they hadn't been forced to in the first place. And they wouldn't have been forced to if they truly had privacy-respectful practices.

What to do?

Use a search engine to look for related news using keywords such as the company's name with "data breach", "fined", or "privacy".

Check the product's or corporation's Wikipedia page, sometimes there will be references to previous incidents and controversies listed there. Follow trustworthy sources of privacy and security news to stay informed about reported data leaks and experts raising the alarm.

Looking for the green(ish) flags

Now that we have discussed some red flags to help us know when we should be careful, let's examine the signs that can be indicators of trustworthiness.

As with red flags, green flags should always be taken in context and considered together. One, or even a few, green flags (or greenish flags) aren't on their own a guarantee that an organization is trustworthy. Always remain vigilant, and be ready to revoke your trust at any time if new information warrants it.

Independent reviews

Independent reviews from trustworthy sources can be a valuable resource to help determine if a product is reliable. This is never a guarantee of course; humans (even experts) can also make mistakes (fewer than AI, but still) and aren't immune to lies.

However, an impartial review conducted by an expert in the field has the benefit of coming from someone who has likely put many hours into investigating the topic, something you might understandably not always have the time to do yourself. But be careful to first evaluate whether it is a genuine unbiased assessment, or simply marketing content disguised as one.

Independent audits

Similarly, independent audits from credible organizations are very useful to assess a product's claims. Make sure the company conducting the audit is reputable and impartial, and that you can find a copy of the audit report it produced, ideally from a source other than the audited company's website (for example, the auditing organization might provide access to it transparently).

Transparency

Transparency helps a lot to earn trust, and source code that is publicly available helps a lot with transparency. If a piece of software publishes its code for anyone to see, this is already a significant level of transparency above any proprietary code.

Open source code is never a guarantee of security and privacy, but it makes it much easier to verify an organization's assertions. This is almost impossible to do when code is proprietary: because no one outside the organization can examine the code, the organization must be trusted entirely on its word. Favor products with code that is transparently available whenever possible.

Verifiable claims

If you can easily verify an organization's claims, this is a good sign. For example, if privacy practices are explicitly detailed in policies (and match the observed behaviors), if source code is open and easy to inspect, if independent audits have confirmed the organization's claims, and if the organization is consistent with its privacy practices (in private as much as in public), this all helps to establish trust.

Well-defined policies

Trustworthy organizations should always have well-defined, unique, and easy-to-read privacy policies and terms of service. The conditions within them should also be fair. You shouldn't have to sell your soul to 1442 marketing partners just to use a service or visit a website.

Read an organization's privacy policy (or privacy notice), and make sure it includes:

  • Language unique to this organization (no copy-paste policy).

  • Disclosure of any parent companies owning this organization (if any).

  • A dedicated email address to contact for privacy-related questions and requests.

  • Detailed information on what data is collected for each activity. For example, the data collected when you use an app or are employed by an organization shouldn't be bundled together indistinctly with the data collected when you simply visit the website.

  • Clear limits on data retention periods (when the data will be automatically deleted).

  • Clear description of the process to follow in order to delete, access, or correct your personal data.

  • A list of third-party vendors used by the organization to process your information.

  • Evidence of accountability. The organization should demonstrate accountability for the data it collects, and shouldn't just transfer this responsibility to the processors it uses.

Availability

Verify availability. Who will you contact if a problem arises with your account, software, or data? Will you be ignored by an AI chatbot just repeating what you've already read on the company's website? Will you be able to reach out to a competent human?

If you contact an organization at the listed privacy-dedicated email address to ask a question, and receive a thoughtful non-AI-generated reply within a couple of weeks, this can be a good sign. If you can easily find a privacy officer's email address, a company phone number, and the location where the organization is based, these can also be encouraging signs.

Clear funding model

If a free service is provided by a for-profit corporation, you should investigate further. The old adage that if you do not pay for a product, you are the product, is sadly often true in tech, and doubly so for big tech.

Before using a new service, try to find what the funding model is. Maybe it's a free service run by volunteers? Maybe they have a paid tier for businesses, but remain free for individual users? Maybe they survive and thrive on donations? Or maybe everyone does pay for it (with money, not data).

If the service is free and you can't really find any details on how it's financed, this could be a red flag that your data is being monetized. But if the funding model is transparent, fair, and ethical, this can be a green flag.

Reputation history

Some errors are forgivable, but others are too big to let go. Look at an organization's track record to help evaluate its reputation over time. Check if there were any security or privacy incidents or expert criticisms, and check how the organization responded to them.

If you find an organization that has always stuck to its values (integrity), is still run by the same core people in recent years (stability), seems to have a generally good reputation with others (reputability), and had few (or no) incidents in the past (reliability), this can be a green flag.

Expert advice

Seek expert advice before using a new product or service. Look online for reliable and independent sources of recommendations (like Privacy Guides!), and read thoroughly to determine if the description fits your privacy needs. No tool is perfect at protecting your privacy, but experts will warn you about a tool's limitations and downsides.

There's also added value in community consensus. If a piece of software is repeatedly recommended by multiple experts (not websites or influencers, experts), then this can be a green flag that this tool or service is generally trusted by the community (at this point in time).

Take a stand for better privacy

Trying to evaluate who is worthy of our trust and who isn't is an increasingly difficult task. While this burden shouldn't fall on us, there are unfortunately too few institutional protections we can rely on at the moment.

Until our governments finally prioritize the protection of human rights and privacy rights over corporate interests, we will have to protect ourselves. But this isn't limited to self-protection: our individual choices also matter collectively.

Each time we dig in to thoroughly investigate a malicious organization and expose its privacy washing, we contribute to improving safety for everyone around us.

Each time we report a business infringing privacy laws, talk publicly about our bad experience getting our data deleted, and, more importantly, refuse to participate in services and products that aren't worthy of our trust, we help improve data privacy for everyone over time.

Being vigilant and reporting bad practices is taking a stand for better privacy. We must all take a stand for better privacy, and expose privacy washing each time we spot it.


Join our forum to comment on this article.

Thank you for reading, and please consider sharing this post with your friends. Privacy Guides is an independent, nonprofit media outlet. We don't have ads or sponsors, so if you liked this work your donation would be greatly appreciated. Have a question, comment, or tip for us? You can securely contact us at @privacyguides.01 on Signal.