
AI and Data Privacy: How to Stay Compliant and Protect Your Business

Kevin Phillips
  • Generative AI
  • May 8, 2025

AI tools are everywhere in business today, from smart CRMs to content generators. But here’s the deal – in the rush to adopt AI, data privacy often becomes the elephant in the room that no one is talking about.

A recent survey found nearly 75% of tech professionals rank data privacy among their top concerns with AI, and 40% call it their number one worry. If you’re a CMO, CEO, or IT leader at a mid-sized B2B company, you might be excited about AI’s potential, yet uneasy about how it handles sensitive information.

At media junction, we get it. We’ve been helping companies navigate digital transformations for over 25 years (yes, since dial-up internet) and we’re a HubSpot Elite Partner to boot.

We’ve launched countless websites and marketing campaigns, and we’ve seen first-hand how trust can make or break customer relationships. When it comes to AI and data, we’ve got the battle scars and the success stories – so consider us your friendly guide on this topic.

By the end of this article, you’ll be armed with a clear understanding of what “data privacy” really means in the age of AI, which privacy laws you need to pay attention to, how popular AI tools actually use (and misuse) your data, and practical tips to avoid privacy pitfalls while still using AI effectively.

In other words, you’ll learn how to embrace AI in your business without landing on the front page for a data privacy scandal. Sound good? Let’s dive in!

what data privacy really means when using AI

When we talk about “data privacy” in the context of AI, we’re really talking about people’s control over their personal information. Data privacy (or information privacy) means that an individual should have authority over how their personal data is collected, used, stored, and shared.

In plain language: if it’s your data (say, your name, email, or browsing history), you get a say in what happens to it. AI doesn’t change that principle, but it does crank up the complexity.

Let’s break down a few key terms in plain language:

  • Personally Identifiable Information (PII): This is any information that could identify a specific person. Think names, addresses, emails, phone numbers, IP addresses, even things like job titles when combined with a company name. If a piece of data lets you figure out who someone is, it’s PII.
  • Consent: Consent means the person gave permission to use their data, ideally in a clear and informed way. It’s the difference between “Sure, you can use my email for that AI demo” and finding out your email was fed into a machine learning model without you knowing. Good AI ethics (and laws like GDPR) say you should always get explicit consent for collecting or using someone’s personal data.
  • Data Processing: This is a fancy term for anything you do with data. Collecting it, analyzing it with an AI algorithm, storing it in a database, feeding it into a predictive model – all of that is “processing” personal data. Under privacy laws, you usually need a valid reason (a “lawful basis”) to process someone’s data, especially if AI is involved.

Why does AI make data privacy trickier? Because AI systems gobble up lots of data – sometimes terabytes of it – to learn patterns. Inevitably, some of that data can be sensitive personal info (like things people posted on social media, or location data from smartphones, or even snippets of health records).

The more personal data an AI ingests, the higher the stakes if something goes wrong. If an AI isn’t designed with privacy in mind, it could end up revealing something personal about someone that should’ve stayed private.

3 privacy laws every business should care about

No, you don’t need to become a lawyer, but you should know the “big three” privacy regulations that could impact your business. 

Here’s a plain-English rundown:

GDPR (Europe)

The General Data Protection Regulation is often called the world’s strictest privacy law. It applies to any company handling personal data of EU residents, regardless of where the company is based.

GDPR demands things like clear consent, data transparency, and a legal basis for processing personal info. Enforcement is serious business – fines can reach up to €20 million or 4% of global annual revenue for violations.

In short, GDPR set the global benchmark for privacy standards.

CCPA/CPRA (California, USA)

The California Consumer Privacy Act (amended by the CPRA) is the U.S. front-runner for privacy regulation. It gives California residents the right to know what personal data companies collect about them, to delete that data, and to opt out of the sale of their info.

If you have customers or users in California (and meet certain business size criteria), you likely fall under CCPA. The 2023 CPRA update added even more protections (like the right to correct data and an enforcement agency).

Essentially, CCPA/CPRA is California’s way of saying “businesses, handle personal data responsibly or face consequences.”

HIPAA (Healthcare, USA)

The Health Insurance Portability and Accountability Act is all about medical data. It applies to healthcare providers, insurers, and any business dealing with protected health information.

HIPAA sets strict standards for how patient data is stored, used, and shared. For most businesses outside the healthcare space, HIPAA won’t apply. But if your company ever touches medical or health-related data (say you’re building an AI tool for patient analytics or running a wellness program with health metrics), you need to follow HIPAA rules or risk heavy penalties.


These laws are about giving people control and confidence. They’re not roadblocks meant to stifle innovation; they’re guardrails that help you build trust. As privacy regulations expand globally, getting on top of compliance now will save you headaches (and potential fines) later.

what AI tools actually do with your data

So what actually happens to your data when you use an AI tool? Here’s the scoop: most AI tools need data from you to do their magic.

You type in a prompt or upload some customer info, and the AI processes it to give you a result (a summary, a suggestion, a prediction, etc.). Behind the scenes, that data doesn’t just evaporate once you get your answer.

It might be stored on the AI provider’s servers, and in many cases, it’s used to train or improve the AI models further. In other words, your data could be feeding the AI brain so it gets smarter over time.

Take HubSpot’s Breeze AI as an example. Breeze is baked into HubSpot’s platform to help with everything from writing content to enriching CRM records. When you use it, HubSpot’s systems are handling your data to generate those results. HubSpot even advises users not to input sensitive personal data into AI prompts.

Why? Because that information might be sent to the AI model (potentially an external AI service powering Breeze) and could be stored or seen by others. If a major platform like HubSpot says “don’t share sensitive info” with their AI, take that as a rule of thumb for any AI tool.

Some AI tools only use your data for a one-time result, while others keep it to learn and improve. For instance, certain AI models might update themselves based on what users input. (Early versions of ChatGPT did this, which is why some companies were wary about employees using it for confidential info.)

The key is to know whether your tool is a “continuous learner” that stores inputs, or a “one-and-done” system that doesn’t retain data. It’s also smart to read the fine print.

One major platform tried to claim broad rights over user data for its AI and faced public backlash, forcing it to backtrack. The lesson? Make sure you understand and control how your data is being used.

Finally, be aware of data leaks and breaches. AI tools are software like any other, and they can have bugs or security holes. For example, ChatGPT once accidentally showed some users the titles of other people’s chat histories. Sure, it was a small glitch, but a reminder that even “private” AI interactions aren’t immune to bugs.

If you’re feeding proprietary business data into an AI and the provider gets hacked or has a bug, that data could be exposed. In short, treat AI tools with the same caution as any cloud service: assume what you put in might stick around somewhere.

Check if the tool lets you delete your data or promises not to use it beyond providing the service. The convenience of AI is amazing – just go in with your eyes open about where your data is going.

best practices for protecting your business and your customers

Now that we’ve covered the risks and rules, let’s talk solutions. How can you use AI in your marketing or operations without running into privacy nightmares?

Here’s a handy checklist of best practices to keep your company and your customers safe.

use data encryption

Encryption is like putting your data in a coded safe – only those with the key can access it. Make sure any AI tools or databases you use encrypt sensitive data both in transit (as it travels over the internet) and at rest (when it’s stored on servers).

That way, even if someone intercepts the data or there’s a breach, the info looks like gibberish to unauthorized eyes. (Many regulations recommend techniques like encryption because it’s one of the best defenses against data leaks.)
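To make the idea concrete, here’s a minimal sketch of encrypting a piece of PII before it’s written to storage. It assumes the third-party `cryptography` package and its Fernet recipe – any vetted authenticated-encryption library works the same way in principle, and the key management shown here is deliberately simplified.

```python
# A hedged sketch of "encryption at rest": PII is encrypted before
# storage, so a leaked database dump is unreadable without the key.
# Assumes the third-party "cryptography" package is installed.
from cryptography.fernet import Fernet

# In a real system the key comes from a secrets manager or KMS,
# never from source code or the same database as the ciphertext.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"jane.doe@example.com"        # PII headed for storage
token = fernet.encrypt(record)          # what actually gets stored
restored = fernet.decrypt(token)        # only key-holders can do this
assert restored == record
```

The same principle applies in transit: insist on TLS (HTTPS) for any connection that carries personal data to an AI vendor.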

only collect what you need 

Adopt a “data minimalism” mindset. The less personal data you collect and store, the lower the risk if something goes wrong. Don’t collect 10 pieces of info when you only really need 3 for your AI tool to do its job.

Privacy laws even encourage this practice for good reason – you can’t lose or misuse data you never collected. Limiting data collection to just what’s essential also shows customers that you’re respecting their info and not hoarding it.
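Here’s one way data minimalism can look in practice: before a record leaves your CRM for an AI tool, strip it down to the fields the tool actually uses and swap the direct identifier for a one-way pseudonymous ID. The record fields and the two-field allowlist below are illustrative assumptions, not a prescription.

```python
# A hedged sketch of data minimization: send an AI tool only the
# fields it needs, never the raw identifiers. Field names are
# hypothetical examples.
import hashlib

crm_record = {
    "name": "Jane Doe",
    "email": "jane.doe@example.com",
    "phone": "555-0100",
    "industry": "Manufacturing",
    "last_purchase": "2025-04-01",
}

NEEDED_FIELDS = {"industry", "last_purchase"}  # all the tool really uses

def minimize(record: dict) -> dict:
    """Drop everything outside the allowlist and replace the email
    with a one-way hash so results can still be matched back."""
    slim = {k: v for k, v in record.items() if k in NEEDED_FIELDS}
    slim["contact_id"] = hashlib.sha256(record["email"].encode()).hexdigest()[:12]
    return slim

payload = minimize(crm_record)  # no name, email, or phone leaves the building
```

If the vendor is ever breached, the worst they can leak is the slimmed-down payload.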

get clear consent

Whenever possible, be upfront and ask permission. If you plan to feed customer data into an AI-powered system, let them know in plain English and get their OK.

For example, if you’re using an AI chatbot that will store customer questions, your website can have a brief notice like, “By using this chat, you agree to our AI processing of your info.”

It’s not just about legal compliance – it’s about being honest and transparent. Customers appreciate when you ask rather than assume. And if they say no, respect that decision.
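If you want consent to be more than a checkbox, enforce it in code: make the consent flag a hard gate that data cannot pass without. The sketch below is a simplified illustration – `Contact`, the consent field, and `call_ai_service` are all hypothetical names, and the placeholder stands in for whatever vendor API you actually use.

```python
# A hedged sketch of a "consent gate": if there's no recorded
# opt-in, the data never reaches the AI service. All names here
# are illustrative.
from dataclasses import dataclass

@dataclass
class Contact:
    email: str
    ai_processing_consent: bool  # set when they accept the chat notice

def call_ai_service(prompt: str) -> str:
    # Placeholder for the real vendor API call.
    return f"AI response to: {prompt}"

def send_to_ai(contact: Contact, prompt: str) -> str:
    if not contact.ai_processing_consent:
        raise PermissionError("no AI-processing consent on record")
    return call_ai_service(prompt)
```

Failing loudly here is the point: a raised error is far cheaper than explaining to a customer why their data was processed without permission.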

vet your vendors

Not all AI tools are created equal. Before you trust a third-party AI platform with your data, do some homework. Check their privacy policy and security practices.

Do they use your data to train their models? Do they share data with any partners? Ideally, choose vendors that commit to not selling or misusing your data.

If possible, sign a data processing agreement (DPA) that binds them to proper privacy standards. Keep in mind, when you outsource to a vendor you still carry the accountability if something goes wrong – it’s your brand on the line.

appoint a data protection officer (DPO) or privacy lead

Even if you’re not legally required to have one, designate someone as your point person for privacy. It could be a manager who wears this hat in addition to their regular duties or an external consultant.

Their job is to keep an eye on data practices, train the team on privacy basics, and stay updated on new laws. Having a dedicated privacy champion helps catch issues before they become big problems.

document your data flows

You can’t protect what you don’t understand. Map out what data you collect, where it goes, and who has access to it.

For instance, if you’re using an AI email marketing tool, document: “It takes customer emails and names from our CRM, sends them to the AI provider’s cloud to generate subject lines, then stores the results back in our database.”

This “data map” will highlight points of risk (e.g., sending data to a vendor – is it encrypted and under contract?) and makes it easier to answer customer requests like “delete my data,” because you know exactly where their data lives.
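A data map doesn’t need special software – even a simple machine-readable list of flows lets you query your risk points. The structure and field names below are one possible sketch based on the email-tool example above, not a standard schema.

```python
# A hedged sketch of a machine-readable data map. Systems, fields,
# and vendor names are illustrative.
DATA_FLOWS = [
    {
        "system": "AI email subject-line tool",
        "data": ["customer email", "customer name"],
        "source": "CRM",
        "destination": "AI vendor cloud",
        "encrypted_in_transit": True,
        "dpa_signed": True,
    },
    {
        "system": "internal analytics dashboard",
        "data": ["page views"],
        "source": "website",
        "destination": "internal",
        "encrypted_in_transit": True,
        "dpa_signed": False,  # not needed: data stays in-house
    },
]

def flows_sending_pii_to_vendors(flows):
    """Flag flows that ship customer data to a third party --
    each one is a point of risk to review (encryption? DPA?)."""
    return [
        f for f in flows
        if f["destination"] != "internal"
        and any("customer" in item for item in f["data"])
    ]
```

Answering a “delete my data” request then starts with one query instead of a scavenger hunt across teams.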


Think of these best practices as your AI privacy playbook. They’re mostly common sense, but you’d be surprised how many companies skip them in the excitement of trying a new AI tool. By ticking off this checklist, you greatly reduce the chance of a privacy fiasco and instead get to enjoy the benefits of AI with peace of mind.

what happens when privacy goes wrong: 2 real-world examples

Even with the best intentions, things can go south if privacy isn’t handled correctly. Let’s look at two cautionary tales that show what can happen when AI and automation collide with personal data in the wrong way, and what business leaders can learn from them.

Clearview AI – face recognition fiasco

Clearview AI is a tech company that built a facial recognition tool by scraping billions of images of people’s faces from social media and the web – all without permission.

They used those pictures to train an AI and then sold access to this massive face database. It doesn’t take a privacy lawyer to see the problem: people’s photos (biometric personal data) were used in an AI system without their consent.

Clearview got hit with multiple lawsuits for violating privacy laws, including Illinois’ biometric privacy law, and ended up in a legal settlement. As part of that, they’ve been barred from selling their face database to most private firms and had to stop providing it to certain jurisdictions.

The company’s reputation took a huge hit. Lesson for leaders: Just because data is publicly available doesn’t mean you can use it however you want. Scraping and using personal data for AI without consent can be illegal and will definitely damage trust.

If your AI idea involves using people’s data in a new way, run it through legal and ethics checks first – it could save you from becoming the next headline.

Zoom’s AI misstep – learning the hard way about consent

In 2023, Zoom (the video conferencing giant) quietly updated its terms of service to give itself the right to use customer call data (videos, transcripts, etc.) for training its AI features.

They didn’t exactly spell this out to users in a prominent way. Naturally, users were unhappy at the thought of their meetings being used to train AI without consent. Facing a wave of backlash, Zoom quickly revised its terms to explicitly say it would not use customer content for AI without consent.

In other words, they backtracked and promised not to opt everyone in by default. Lesson for leaders: Transparency is key. If you plan to use customer data for AI, you absolutely need to communicate that upfront and get consent.

Burying it in the fine print or assuming everyone is okay with it can backfire badly. Zoom’s experience shows that even well-established companies can misjudge public sentiment on privacy.

It’s a lot easier to do the right thing from the start than to clean up a PR mess afterward. In both cases, the theme is that ignoring privacy can cost you – whether in lawsuits, fines, or lost reputation.

But these examples also prompted positive change: Clearview’s case sparked calls for stricter AI regulations, and Zoom’s fiasco made other companies more careful about how they roll out AI features.

As a business leader, you don’t want to be the cautionary tale. It’s much better to learn from these stories and proactively put privacy first.

keep the momentum going: make privacy your competitive edge

Here’s the deal—AI isn’t slowing down, and neither is the scrutiny around how businesses handle data. The companies that lead the pack won’t be the ones with the flashiest tools, but the ones that know how to wield those tools with care, clarity, and responsibility.

When you build trust through smart, transparent AI practices, you don’t just avoid risks—you create real, lasting loyalty.

So, what’s next? It’s not about locking down every system or putting AI on pause. It’s about putting privacy at the heart of your AI strategy, from the first prompt to the final output.

Customers are paying attention. They want the convenience of AI without feeling like their data is being passed around like a party favor. Respect that, and you win trust.

Here at media junction, we’ve seen this play out again and again over the last 25+ years. The brands that do privacy right don’t just avoid problems—they build something better. We offer HubSpot training (including Breeze AI) and an AI Content Bootcamp to help your team create smart, compliant content that resonates.

Ready to go from reactive to proactive? Reach out to see how we can help you get started using AI tools responsibly and ethically, all while protecting your data, earning customer trust, and creating content that actually moves the needle.


Kevin Phillips

Meet Kevin Phillips, your go-to expert for making digital content that gets noticed. With a decade of experience, Kevin has helped over 150 clients with their websites, messaging, and marketing strategies. He won the Impact Success Award in 2017 and holds certifications like Storybrand and They Ask, You Answer. Kevin dives deep into content creation, helping businesses engage customers and increase revenue. Outside of work, he enjoys snowboarding, disc golf, and being a dad to his three kids, blending professional insight with a dash of humor and passion.

