
GDPR for Websites, Stores, and SaaS Products in 2026: A Practical Guide


Most GDPR content treats compliance as a legal box to tick. The reality in 2026 is that GDPR enforcement has matured significantly — multi-million-euro fines are routine, cookie banners are being audited at scale, and procurement teams now block deals over inadequate data handling. This is a practitioner’s guide to what actually matters when you’re building a website, ecommerce store, or SaaS product that processes EU residents’ data.

Key takeaways

  • GDPR applies whether you’re in the EU or not. If you process the personal data of EU residents — marketing site visitors, ecommerce customers, SaaS users — the regulation reaches you. Being a US company doesn’t exempt you. Recent fines confirm this is enforced extraterritorially.
  • The “under 250 employees” exemption is mostly a myth. Article 30(5) provides a narrow exemption only from full Records of Processing Activities, and only when processing is occasional, not high-risk, and doesn’t include special categories. For any modern startup with a CRM, analytics, or marketing automation, the exemption doesn’t apply.
  • Cookie banner enforcement is now systematic. The CNIL, the Italian DPA, and others are issuing fines for non-compliant banners (manipulative dark patterns, missing reject-all buttons, pre-loaded trackers). “We have a banner” is no longer adequate — the banner has to actually work.
  • Compliance closes deals. Enterprise procurement teams will request your DPA, sub-processor list, security posture, and data residency before signing. Doing this badly adds weeks to the sales cycle, or kills the deal outright. Doing it well is a sales accelerator.

Does GDPR apply to you?

The short answer for any company in 2026: probably yes. The technical scope is in Article 3 of the regulation:

  • You’re established in the EU. Any company with a real establishment (an office, a permanent presence) in any EU member state is in scope, regardless of where its customers are.
  • You offer goods or services to people in the EU. Your website is in English, accepts euros, ships to EU addresses, lists EU customers as case studies, has a German privacy policy translation — all of these are signals that you’re targeting EU residents. “We don’t have an EU office” doesn’t help if your website is clearly built for EU customers.
  • You monitor the behavior of people in the EU. Tracking pixels, ad retargeting, analytics that capture EU visitors — these put you in scope. Most websites with Google Analytics or Meta Pixel are technically in scope as soon as a single EU resident loads the page.

The honest framing: if your product is online and accessible globally, you’re in scope. “We’re too small to matter” was the assumption five years ago; in 2026, the regulators have stopped accepting it. Recent enforcement against US-based mid-market SaaS companies confirms the extraterritorial reach is real and being exercised.

The myth of the 250-employee exemption

Article 30(5) is the exemption that founders cite. The actual text limits it to organizations under 250 employees, AND processing that is “occasional,” AND processing that doesn’t include special categories of data (health, biometrics, ethnicity, etc.), AND processing that’s unlikely to result in risk to data subjects’ rights. All four conditions must be met.

For a typical modern startup running CRM systems, marketing automation, product analytics, ad retargeting, customer support tooling, and HR systems — the processing is continuous (not occasional), so the exemption doesn’t apply. Even when it does apply, it only exempts you from maintaining detailed Records of Processing Activities. It does not exempt you from any of the substantive obligations: lawful bases, user rights, breach notification, security measures, DPAs with processors, or compliant data transfers.

The lawful bases for processing personal data

Every act of processing personal data needs to rest on one of six lawful bases under Article 6. Picking the right one upfront determines what you can do with the data, what user rights apply, and how complex your compliance posture becomes.

  • Consent (Article 6(1)(a)). The user explicitly opts in. Required for non-essential cookies, marketing emails to non-customers, sharing data with third parties for advertising, and most AI training use cases. Consent must be freely given, specific, informed, and unambiguous — and must be as easy to withdraw as it was to give. Pre-ticked boxes don’t qualify. “By using this site you agree to everything” doesn’t qualify.
  • Contract (Article 6(1)(b)). Processing necessary to perform a contract with the user. The right basis for an ecommerce order — you need the buyer’s address to ship the product. The right basis for SaaS user accounts — you need their email to operate their account. Doesn’t extend to optional features (“we’d also like to use your data for marketing”) — those need separate consent.
  • Legal obligation (Article 6(1)(c)). Processing required by law. Tax records (you must retain), AML/KYC checks for regulated businesses, response to law enforcement requests with valid legal basis. Specific and narrow.
  • Vital interests (Article 6(1)(d)). Processing to protect someone’s life. Rarely relevant for most businesses; matters for healthcare, emergency services.
  • Public task (Article 6(1)(e)). Government and public-interest bodies, mostly. Not relevant for most private companies.
  • Legitimate interests (Article 6(1)(f)). The catch-all for processing where you have a real interest, the user’s rights aren’t overridden, and you’ve documented the balancing test. Common bases under this: fraud prevention, basic security logging, internal product analytics for service improvement, sending transactional service updates. Not a license to do whatever you want — you must document the balancing test (Legitimate Interests Assessment, or LIA) and be ready to show it on request.

The mistake most teams make: defaulting to consent for everything when contract or legitimate interests would be the cleaner basis. Over-relying on consent means every withdrawal of consent stops the processing — making operations brittle. The pattern that works: contract for what’s necessary to deliver the service, legitimate interests for fraud and security, consent only where it’s actually required (marketing, non-essential analytics, ad targeting, AI training).
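
One way to keep that mapping explicit is a small processing register maintained alongside the code. A minimal TypeScript sketch, where the activity names, data categories, and retention periods are illustrative placeholders, not recommendations:

```typescript
// Illustrative register of processing activities mapped to lawful bases.
// The entries below are hypothetical; your own register will differ.
type LawfulBasis =
  | "consent"                // Art. 6(1)(a)
  | "contract"               // Art. 6(1)(b)
  | "legal_obligation"       // Art. 6(1)(c)
  | "legitimate_interests";  // Art. 6(1)(f)

interface ProcessingActivity {
  purpose: string;
  basis: LawfulBasis;
  dataCategories: string[];
  retention: string;       // documented retention period
  liaReference?: string;    // link to the balancing test; required for legitimate interests
}

const processingRegister: ProcessingActivity[] = [
  { purpose: "Order fulfilment", basis: "contract",
    dataCategories: ["name", "shipping address", "email"], retention: "order + 2 years" },
  { purpose: "Fraud prevention", basis: "legitimate_interests",
    dataCategories: ["IP address", "device fingerprint"], retention: "13 months",
    liaReference: "docs/lia-fraud-prevention.md" },
  { purpose: "Marketing newsletter", basis: "consent",
    dataCategories: ["email"], retention: "until consent withdrawn" },
  { purpose: "Tax records", basis: "legal_obligation",
    dataCategories: ["invoices"], retention: "10 years" },
];
```

A register like this doubles as the starting point for your Article 30 records and for answering procurement questionnaires.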

The user rights you have to support, technically

Articles 12–22 of GDPR establish user rights. Each has a technical implementation cost that most teams underestimate:

  • Right of access (Article 15). Users can request a copy of their personal data plus information on what you do with it, who you share it with, and how long you keep it. Practical implementation: a self-service “download my data” feature in the user account, or a documented internal process that responds within one month. The data export must include data from every system that holds the user’s data — your app database, your CRM, your support tickets, your email logs.
  • Right to rectification (Article 16). Users can correct inaccurate data. Most apps already support this through profile editing. The gap: data in CRMs, marketing automation, and analytics that the user can’t directly edit — you need a process to update those when notified.
  • Right to erasure / “right to be forgotten” (Article 17). Users can request deletion. Not absolute — you can refuse if you have a legal obligation to retain the data, or if the data is necessary for ongoing contract performance. The implementation challenge: deleting from every backup, every analytics aggregation, every third-party processor. Most teams discover their data architecture wasn’t designed for cascade deletion.
  • Right to data portability (Article 20). Users can request their data in a machine-readable format (JSON, CSV) and have it transferred to another controller. Applies when processing is based on consent or contract, and when processing is automated. Functionally: an export endpoint or a documented process.
  • Right to restriction (Article 18). Users can request that you stop processing while a dispute is resolved. Technical implementation: a “freeze” flag in your user model that’s checked before any processing.
  • Right to object (Article 21). Users can object to processing based on legitimate interests, and absolutely to direct marketing. The marketing objection is unconditional — if a user says “stop marketing to me,” you must.
  • Rights related to automated decision-making (Article 22). Users have the right not to be subject to decisions based solely on automated processing that produces legal or similarly significant effects. Affects credit scoring, automated hiring screens, and increasingly AI-powered decisions in fintech, insurance, and employment.

The mature implementation: a “privacy dashboard” in the user account where users can see what data you have, export it, correct it, restrict processing, and delete it. The minimum implementation: a documented internal process and a privacy@ email address that responds within 30 days. The illegal implementation: ignoring requests, asking for excessive ID verification to discourage them, or charging fees (you can’t, except for manifestly unfounded or excessive requests).
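
To make the implementation cost concrete, here is a minimal TypeScript sketch of two of the rights above: the Article 18 restriction flag and an Article 15/20 export that aggregates across systems. The fetcher functions are hypothetical stand-ins for whatever actually holds the user's data:

```typescript
// Hypothetical fetchers; replace with real integrations.
declare function fetchFromAppDb(id: string): Promise<unknown>;
declare function fetchFromCrm(id: string): Promise<unknown>;
declare function fetchFromSupport(id: string): Promise<unknown>;

interface UserRecord { id: string; processingRestricted: boolean }

// Article 18: check the restriction flag before any non-essential processing.
function assertProcessingAllowed(user: UserRecord): void {
  if (user.processingRestricted) {
    throw new Error(`Processing restricted for user ${user.id} (Art. 18 request pending)`);
  }
}

// Articles 15/20: aggregate the export from every system that holds user data.
async function exportUserData(userId: string): Promise<string> {
  const [app, crm, support] = await Promise.all([
    fetchFromAppDb(userId),   // primary application database
    fetchFromCrm(userId),     // CRM / marketing automation
    fetchFromSupport(userId), // support tickets
  ]);
  // A machine-readable format satisfies the portability requirement (Art. 20).
  return JSON.stringify({ app, crm, support }, null, 2);
}
```

The hard part is rarely the endpoint itself; it's knowing every system that needs a fetcher.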

Cookie consent and the banner reality in 2026

Cookie banners are where most companies’ GDPR posture is publicly visible — and increasingly, where regulators are starting their investigations. The 2026 enforcement reality:

  • The CNIL (France) has issued multi-million-euro fines against Google, Meta, Amazon, and others specifically over cookie banner practices — making “reject all” harder than “accept all,” pre-loading trackers before consent, ambiguous wording, dark patterns. The fines are large enough that boards take notice.
  • The EDPB has issued guidance on what constitutes valid consent. Highlights: “reject all” must be as easy as “accept all” (one click, same prominence), no pre-ticked boxes, granular controls (you can’t bundle marketing cookies with analytics cookies), withdrawal must be as easy as giving consent.
  • Implied consent is dead. “By continuing to browse this site you agree to our use of cookies” was already weak; in 2026 it’s actively risky. Consent must be active, not implied.
  • Google Consent Mode v2 is required for EEA measurement if you run Google Ads or use Google Analytics for ad personalization. Since March 2024, Google requires consent signals to flow through their tooling for EEA users, or your ad personalization and remarketing effectively stop working.

What a compliant cookie setup actually looks like

The technical pattern that actually works:

  1. Block all non-essential scripts at load time. No analytics, no Meta Pixel, no Hotjar, no third-party tracking until consent is recorded. The script tags should not load — not just the cookies they set.
  2. Show a banner with three equally prominent options: Accept all, Reject all, Customize. “Reject all” cannot be a tiny grey link below the accept button.
  3. Granular categories in the customize view: Strictly necessary (cannot be rejected), Analytics, Marketing/Advertising, Functional/Personalization. Each category should list the specific vendors and purposes.
  4. Record consent with a timestamp and version in your database, tied to the user (or session for anonymous visitors). You need to be able to prove later that consent was given.
  5. Provide a persistent way to change preferences. A “Cookie preferences” link in the footer that re-opens the banner. Withdrawing consent must be as easy as giving it.
  6. Integrate with Google Consent Mode v2 if applicable. The consent signals must flow through to GTM and Google’s measurement APIs.
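
A minimal TypeScript sketch of steps 1, 4, and 6: nothing loads until consent is recorded, the consent record is persisted with a timestamp and policy version, and the signal is forwarded to Google Consent Mode v2. The helper functions and the policy version string are assumptions; the gtag consent keys are the real Consent Mode v2 parameters:

```typescript
// Hypothetical helpers: persist the consent record, inject a script tag.
declare function saveConsentRecord(record: ConsentRecord): Promise<void>;
declare function loadScript(src: string): void;
declare function gtag(...args: unknown[]): void;

interface ConsentRecord {
  analytics: boolean;
  marketing: boolean;
  functional: boolean;
  timestamp: string;      // when consent was given, for proof later
  policyVersion: string;  // which banner/policy text the user saw
}

async function onConsentChoice(choice: Omit<ConsentRecord, "timestamp" | "policyVersion">) {
  const record: ConsentRecord = {
    ...choice,
    timestamp: new Date().toISOString(),
    policyVersion: "2026-01",
  };
  await saveConsentRecord(record); // step 4: a provable consent log

  // Step 6: forward the signal to Google Consent Mode v2.
  gtag("consent", "update", {
    analytics_storage: record.analytics ? "granted" : "denied",
    ad_storage: record.marketing ? "granted" : "denied",
    ad_user_data: record.marketing ? "granted" : "denied",
    ad_personalization: record.marketing ? "granted" : "denied",
  });

  // Step 1 enforced: only now load the scripts the user agreed to.
  if (record.analytics) loadScript("https://www.googletagmanager.com/gtag/js?id=G-XXXXXXX");
  if (record.marketing) loadScript("https://connect.facebook.net/en_US/fbevents.js");
}
```

Before any choice is made, the consent defaults should be set to denied and no tracking script tag should exist in the page at all.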

The tooling that handles this without custom development: Complianz (WordPress, deepest configuration), CookieYes (cloud-based, simpler), OneTrust (enterprise-grade, expensive), Iubenda (good for multi-language, includes generated privacy policies), Termly (US-friendly, EU-aware). Pick one, configure it properly, audit it quarterly. The DIY cookie banner is almost always non-compliant in subtle ways that a CMP would have caught.

Privacy by Design and data minimization

Article 25 requires “data protection by design and by default” — which is regulator-language for “don’t collect data you don’t need, don’t keep it longer than necessary, and don’t make it default-public.” In practice this affects how you build features:

  • Forms collect only what you need. Every additional field is data you have to protect, retain, and potentially delete on request. The “phone number” field you collect at signup but never use is a liability with no value. Audit your forms quarterly and remove fields that don’t have a documented purpose.
  • Default settings favor privacy. Profile visibility, marketing email opt-ins, third-party data sharing — these should default off, with users opting in. Article 25(2) is specific about this.
  • Retention periods are documented and enforced. “How long do we keep this data?” should have an answer for every data category, and the answer should be tied to a specific business or legal need. Old user accounts should be auto-deleted or anonymized after a defined inactivity period. Cron jobs that actually do this beat policies that exist on paper (see the sketch after this list).
  • Pseudonymization where possible. Internal analytics on user behavior should generally use pseudonymous user IDs, not raw email addresses or names. The fewer systems that have the actual identity, the smaller your breach radius if something goes wrong.
  • Encryption at rest and in transit. TLS 1.2+ everywhere (and increasingly TLS 1.3), database encryption at rest, encrypted backups. Regulators treat encryption as the floor, not a feature — a breach involving unencrypted data is treated as more severe.
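
The retention sketch referenced above, assuming a Postgres-style database behind a hypothetical `db` client; the table, columns, and 24-month window are illustrative:

```typescript
// Hypothetical query client; swap in pg, knex, Prisma raw, etc.
declare const db: { query(sql: string): Promise<{ rowCount: number }> };

async function enforceRetention(): Promise<void> {
  // Anonymize accounts inactive beyond the documented retention period.
  const anonymized = await db.query(
    `UPDATE users
        SET email = NULL, name = NULL, phone = NULL, anonymized_at = now()
      WHERE last_active_at < now() - interval '24 months'
        AND anonymized_at IS NULL`
  );
  // Keep an audit trail proving the job ran and what it touched.
  console.log(`retention job: anonymized ${anonymized.rowCount} accounts`);
}
```

Run it on a schedule, log every run, and keep the logs: "the deletions ran" is exactly what an auditor will ask you to prove.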

For SaaS products specifically, this also means: your data model should be tenant-isolated by design (no chance of one customer’s data leaking into another customer’s queries), your logging should not capture sensitive payloads (don’t log full request bodies that include PII), and your sub-processor list should be minimal and documented.
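
For the logging point, a sketch of a recursive payload scrubber; the key list is illustrative, and an allowlist of fields known to be safe is the more defensive design:

```typescript
// Illustrative denylist of PII-bearing field names.
const SENSITIVE_KEYS = new Set(["email", "name", "phone", "address", "password", "token"]);

function redact(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redact);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([k, v]) =>
        SENSITIVE_KEYS.has(k.toLowerCase()) ? [k, "[REDACTED]"] : [k, redact(v)]
      )
    );
  }
  return value;
}

// Usage: log the redacted body, never the raw one.
// logger.info({ body: redact(req.body) }, "incoming request");
```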

International data transfers post-Schrems II

This is the area where most US-based companies serving EU customers have the biggest exposure. The Schrems II ruling (CJEU, July 2020) invalidated the EU-US Privacy Shield, and while the EU-US Data Privacy Framework adopted in 2023 partially restored a path, the legal landscape remains complicated.

The 2026 reality:

  • Adequacy decisions cover transfers to a handful of countries (UK, Switzerland, Japan, Canada (commercial), Israel, others). Transfers to these countries are treated as if they stayed in the EU.
  • The EU-US Data Privacy Framework (DPF) allows transfers to certified US companies. The certification is voluntary; the company must be on the official DPF list. If your processors aren’t certified, this path doesn’t apply to them.
  • Standard Contractual Clauses (SCCs) are the workhorse mechanism for transfers to most other countries. Updated SCCs (June 2021) are required — if your DPAs reference the old SCCs, they need updating. SCCs alone aren’t enough — you also need a Transfer Impact Assessment (TIA) documenting that the destination country actually provides adequate protection in practice, and supplementary measures if it doesn’t.
  • Binding Corporate Rules (BCRs) are the enterprise solution for multinational groups. Expensive and slow to set up, but durable once approved.

For most SaaS startups, the practical pattern: certify under the DPF if you’re a US company processing EU data, use updated SCCs with non-DPF processors, run a Transfer Impact Assessment for any transfer outside the EU, and document everything. “We use AWS in the US” is fine if you’ve done the assessment and documented the supplementary measures (encryption, access controls, AWS’s contractual commitments around government access requests); it’s not fine if you haven’t.

Data Processing Agreements (DPAs)

Article 28 requires a written contract whenever a controller (you) uses a processor (a third party that processes personal data on your behalf). Every analytics vendor, every email service, every cloud host, every CRM — they’re all processors.

A valid DPA must include specific terms required by Article 28(3): the subject matter and duration of processing, the nature and purpose, the type of personal data, the categories of data subjects, the obligations of the processor, and so on. Most major vendors publish a standard DPA on their website (Stripe, Google, AWS, Vercel, HubSpot, Salesforce all have these). Sign them. Track them. The procurement question “can you provide your DPA with [vendor]?” comes up in nearly every enterprise sales cycle.

The trap most companies fall into: small vendors without published DPAs. The freelancer who does your email marketing, the boutique design tool, the small analytics platform — they often don’t have a standard DPA, and your team doesn’t push for one. Each one is a gap in your processor chain, and a question you can’t answer when an enterprise prospect’s procurement team asks. The fix: a template DPA your legal team has approved, that you require any vendor handling personal data to sign before onboarding.

AI processing and GDPR

The intersection of AI and GDPR has become the most scrutinized area of 2026 enforcement. Italian regulators temporarily banned ChatGPT in 2023 over GDPR concerns; the EDPB has issued guidance on training data sourcing; Article 22 on automated decision-making is being applied to AI-driven decisions; and the EU AI Act (which applies in stages through 2026) layers additional requirements on top of GDPR for AI systems.

The practical issues for any company integrating AI into their product or website:

  • Lawful basis for AI training. Training a model on user-generated content (chat messages, support tickets, documents) requires a lawful basis. Consent is the cleanest path; legitimate interests can work if you’ve documented the balancing test and offered an opt-out. “We may use your data to improve our service” buried in the terms of service is not valid consent for AI training.
  • Sub-processor disclosures for LLM APIs. If you use OpenAI, Anthropic, Google Gemini, etc. as a backend for AI features, those providers are sub-processors of your customer’s data. You need DPAs in place, and you need to disclose them in your sub-processor list. The data residency question (“is data sent to US servers?”) is a frequent enterprise procurement question.
  • Right to erasure for training data. If a user requests erasure of their data and that data was used for model training, the question of whether you must retrain or remove the contribution is technically and legally unsettled. Most providers commit to not training on customer data by default for enterprise tiers; for consumer tiers, the situation is messier.
  • Automated decision-making (Article 22). If an AI system makes decisions with legal or significant effects (hiring screens, credit decisions, content moderation that affects livelihood), you need to provide meaningful information about the logic, give users the right to human review, and offer the ability to contest decisions. “The AI decided” is not a defense.
  • Special categories of data. AI models trained on or processing health, biometric, ethnic, religious, or sexual-orientation data require explicit consent or another Article 9(2) basis. Image classification systems that identify ethnicity, sentiment systems that infer mental state, voice analysis that infers emotion — all are processing special categories of data.

The pattern that works: clear opt-ins for AI features that process user data, documented data retention for AI training data, contractual no-training commitments from your LLM provider for enterprise customers, and human review pathways for any AI decision with real-world consequences.
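
A sketch of the opt-in gate, assuming a hypothetical consent lookup against your own records; keeping the user ID attached to each training sample is what makes later Article 17 erasure requests tractable:

```typescript
// Hypothetical consent lookup and training-set writer.
declare function hasAiTrainingConsent(userId: string): Promise<boolean>;
declare function appendToTrainingSet(sample: { userId: string; text: string }): Promise<void>;

async function maybeCollectForTraining(userId: string, text: string): Promise<void> {
  if (!(await hasAiTrainingConsent(userId))) {
    return; // no recorded opt-in: the data never reaches the training pipeline
  }
  // Storing userId alongside the sample lets an erasure request
  // locate and remove this user's contributions later.
  await appendToTrainingSet({ userId, text });
}
```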

Breach notification: the 72-hour clock

Article 33 requires notification to the supervisory authority within 72 hours of becoming aware of a personal data breach — unless the breach is unlikely to result in a risk to data subjects’ rights and freedoms. Article 34 requires notification to affected users when the risk is high. Both clocks start when you become aware of the breach, not when you decide to disclose.

The technical and operational requirements this implies:

  • Real intrusion detection. You can’t notify a breach you didn’t detect. Logging, alerting, and at least basic security monitoring are prerequisites — not optional luxuries.
  • An incident response plan that’s been rehearsed. Who declares an incident? Who investigates? Who decides whether to notify? Who actually files the notification? These shouldn’t be questions you’re answering for the first time at 2am on a Saturday.
  • Documentation of every breach — even ones that don’t require notification. Article 33(5) requires you to maintain records of all breaches, your assessment of risk, and the actions taken.
  • Communication templates ready. The notification to the supervisory authority has required content (Article 33(3)). The notification to users (Article 34) has required content. Drafting these from scratch under pressure produces worse results than templates reviewed in advance by counsel.

The 72-hour window is hard. Most breaches involve forensics that take longer than 72 hours to fully understand. The regulation accommodates this — you can submit an initial notification and update it as you learn more — but the initial notification must happen on time. Late notification is itself an Article 33 violation, separate from whatever caused the breach.
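
The clock itself is simple enough to encode. A sketch, with an illustrative incident shape; the point is that the deadline derives from detection, not from when forensics finish:

```typescript
interface Incident {
  detectedAt: Date;            // when you became aware of the breach
  notifiedAuthorityAt?: Date;  // when the Art. 33 notification was filed
}

const NOTIFY_WINDOW_MS = 72 * 60 * 60 * 1000;

function notificationDeadline(incident: Incident): Date {
  return new Date(incident.detectedAt.getTime() + NOTIFY_WINDOW_MS);
}

function isNotificationLate(incident: Incident, now = new Date()): boolean {
  const filedAt = incident.notifiedAuthorityAt ?? now;
  return filedAt.getTime() > notificationDeadline(incident).getTime();
}
```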

The DPO question

Article 37 requires a Data Protection Officer (DPO) in three cases: public authorities, organizations whose core activities involve large-scale systematic monitoring of data subjects, or organizations whose core activities involve large-scale processing of special categories of data. “Large-scale” is qualitative — not a hard headcount threshold — but for SaaS or ecommerce companies under a few hundred employees, the formal DPO requirement usually doesn’t apply.

That said, even when a formal DPO isn’t required, having a designated person responsible for data protection (a privacy lead, a compliance owner, a part-time external DPO) is valuable. Procurement questionnaires routinely ask “who is your DPO or privacy contact?” and “none” is not a great answer. External DPO services run $500–$5,000/month depending on the company size and complexity, and they’re a reasonable middle path until in-house headcount justifies a full-time hire.

FAQ

Does GDPR apply to my US-based SaaS if I have no EU office?

If you offer your service to people in the EU, or if you monitor their behavior (analytics, cookies, marketing), then yes. Recent enforcement against US-only companies confirms the regulators apply this extraterritorially. The practical risk: a complaint from an EU user can trigger investigation by the relevant supervisory authority, and having no EU establishment is not a defense. The mature posture is to comply, regardless of where you’re headquartered.

What’s the actual fine risk?

The maximum fine is the higher of €20 million or 4% of global annual turnover — the second number is what scares boards of large companies. In practice, fines for small and mid-sized companies are typically in the tens to low hundreds of thousands of euros, with the most-fined offenses being inadequate security (Article 32) and failure to obtain valid consent. The bigger commercial risk for most startups isn’t the fine itself — it’s the procurement teams that won’t sign with you, the press cycle if you have a public breach, and the cost of post-hoc remediation when you have to retrofit compliance under pressure.

Is Google Analytics still GDPR-compliant?

It depends on configuration. Google Analytics 4 (GA4) with proper Consent Mode v2 implementation, IP anonymization, and disabled ad personalization can be configured compliantly for EEA users. Universal Analytics has been retired. The cleaner path for many EU-focused companies is server-side analytics or privacy-first tools (Plausible, Fathom, Matomo) that avoid the third-party data transfer question entirely. Several EU regulators have issued mixed guidance on Google Analytics use; the safe posture is GA4 properly configured with Consent Mode, or an alternative tool.

Do I need a cookie banner if I only use “strictly necessary” cookies?

Technically, no — cookies that are strictly necessary for the requested service don’t require consent under the ePrivacy Directive. But “strictly necessary” is narrow: the session cookie that keeps the user logged in, the cart cookie that holds the cart contents, the security token. Almost any analytics or marketing tool falls outside this. If you only have strictly necessary cookies, you don’t need a banner — but you should still have a clear privacy policy explaining what cookies you use. If you have anything else (Google Analytics, Meta Pixel, Hotjar, Intercom widget that tracks pre-login users), you need a compliant banner.

How long should I keep user data?

As long as it’s necessary for the documented purpose, and no longer. Specific retention periods depend on the data type and the legal basis: tax records typically 7–10 years (legal obligation), customer order history as long as the business relationship plus a reasonable period for warranty/dispute purposes, marketing email lists until consent is withdrawn, anonymous analytics until the analytics window closes. The practical implementation: a written retention schedule per data category, automated deletion or anonymization jobs that enforce it, and audit logs proving the deletions ran.

Want help getting this right?

EtherLabz builds websites, ecommerce stores, and SaaS products with GDPR compliance baked into the architecture — not bolted on afterward. If you’re trying to ship a compliant build, retrofit an existing one, or pass an enterprise procurement security review, book a discovery call. We’ll tell you honestly what’s a real compliance gap and what’s procurement-team theater.

Written by Mike, with input from the EtherLabz team. This article is general information, not legal advice. Compliance decisions should involve qualified counsel familiar with your specific situation.