The 10 largest GDPR fines on Big Tech

The state of enforcement of the European Union’s flagship privacy regime, the General Data Protection Regulation (GDPR), on the most powerful tech giants remains a topic of ongoing debate. Below we’ve compiled a list of the 10 largest GDPR fines imposed on Big Tech since the regulation started to apply back in May 2018.

Meta, the owner of Facebook, Instagram and WhatsApp, tops the list, both for receiving the single biggest fine to date (€1.2 billion, or around $1.31 billion at current exchange rates) and for accounting for the majority of these penalties: six of the 10 entries below, several of which span more than one of its platforms (see the quick tally after the list).

Please note this list only includes major penalties issued to tech firms under the GDPR. In recent years, some significant sanctions have also been issued on Big Tech via the bloc’s older ePrivacy Directive, but you won’t find those listed here.

Penalties issued to tech firms under GDPR

1. Meta (Facebook): Fined €1.2 billion (~$1.31 billion) in May 2023 by the Irish Data Protection Commission (DPC) for violating the rules on transferring Facebook users’ personal data out of the European Union.

2. Amazon: Fined €746 million (~$815 million) in July 2021 by Luxembourg’s National Commission for Data Protection (CNPD) following complaints that its use of personal data for ad targeting was not based on consent.

3. Meta (Instagram): Fined €405 million (~$443 million) in September 2022 by Ireland’s DPC for failings in its handling of minors’ data.

4. Meta (Instagram and Facebook): Fined a total of €390 million (~$426 million) in January 2023 by Ireland’s DPC for failing to have a valid legal basis to process user data for ad targeting.

5. ByteDance (TikTok): Fined €345 million (~$377 million) in September 2023 by Ireland’s DPC for failings in its handling of minors’ data.

6. Meta (Facebook and Instagram): Fined €265 million (~$290 million) in November 2022 by Ireland’s DPC for breaches of data protection by design and default after certain platform features, including contact importer and search tools, made the personal data of hundreds of millions of users discoverable to all other users.

7. Meta (WhatsApp): Fined €225 million (~$246 million) in September 2021 by Ireland’s DPC for breaking GDPR transparency obligations and failing to make it clear to users how it processes their data.

8. Alphabet/Google (Android): Fined €50 million (~$55 million) in January 2019 by France’s National Commission on Informatics and Liberty (CNIL) for transparency and consent failings related to its Android mobile platform.

9. Meta (Facebook): Fined €17 million (~$18.5 million) in March 2022 by the Irish DPC for a string of security breaches thought to have affected up to 30 million users.

10. ByteDance (TikTok): Fined around €14.8 million at current exchange rates (~$16 million) in April 2023 by the U.K.’s Information Commissioner’s Office (ICO) in another case related to minor protection. (Note: Despite the U.K. no longer being in the EU, its data protection rules are still based on the GDPR.)
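For readers who want to sanity-check the Meta math, here’s a quick, illustrative tally of the list above in Python. The figures are simply the rounded euro amounts already cited, in millions; nothing here comes from outside the list.

```python
# Quick tally of the top-10 list above (amounts in millions of euros).
# Illustrative only; figures are the rounded euro amounts cited in the list.
from collections import Counter

fines = [
    ("Meta", 1200), ("Amazon", 746), ("Meta", 405), ("Meta", 390),
    ("ByteDance", 345), ("Meta", 265), ("Meta", 225),
    ("Alphabet/Google", 50), ("Meta", 17), ("ByteDance", 14.8),
]

counts = Counter(company for company, _ in fines)
totals = Counter()
for company, amount in fines:
    totals[company] += amount

print(counts["Meta"])   # 6 of the 10 entries
print(totals["Meta"])   # 2502 -> roughly 2.5 billion euros across these fines alone
```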

Not strictly Big Tech but worth a mention

Adtech giant Criteo was issued a preliminary fine of €60 million (~$65 million) in August 2022 by France’s CNIL for a range of GDPR breaches. But in June 2023, the penalty was reduced to €40 million (~$44 million) after the company made representations. The enforcement followed complaints that Criteo did not have users’ consent to track and profile them for ad targeting.

Another bonus mention: U.S.-based AI startup Clearview AI was fined the maximum possible given its revenue (€20 million, or around $22 million) a full three times in 2022, by data protection authorities in Italy, Greece and France. The sanctions were for unlawful data processing stemming from its tactic of scraping selfies off the internet to train a facial-recognition ID-matching AI tool. The same year, the U.K.’s ICO hit it with a smaller sanction for GDPR breaches, so the controversial startup’s activities have drawn plenty of enforcement.


Spotify and Epic Games call Apple’s revised DMA compliance plan ‘confusing,’ ‘illegal’ and ‘unacceptable’

Count Spotify and Epic Games among the Apple critics who are not happy with the iPhone maker’s newly revised compliance plan for the European Union’s Digital Markets Act (DMA). Shortly after Apple announced the updated version on Tuesday, including loosened restrictions along with the addition of two more fees, Spotify shared a statement with TechCrunch calling the plan “unacceptable” and claiming Apple was once again disregarding “the fundamental requirements” of the DMA. Epic Games CEO Tim Sweeney, meanwhile, called the revisions another case of “malicious compliance” involving “junk fees.”


The European Commission had already determined that Apple’s first attempt at DMA compliance had failed and was investigating the fee structure proposed under Apple’s new DMA business terms, which included a new Core Technology Fee charged for the privilege of using Apple’s technology to build mobile apps.

Under Apple’s new policy, proposed today, developers who want to link out to their websites from inside their iOS apps no longer have to accept Apple’s DMA terms to do so. But those developers will still have to pay Apple, even if they no longer face the Core Technology Fee that comes with those terms. In its place, Apple added two new fees: an “Initial Acquisition Fee” and a “Store Services Fee.” The former is a commission of sorts for connecting users with the app through the App Store, applied during the first 12 months; the latter helps fund Apple’s App Store operations and is charged on a fixed 12-month basis, meaning it applies to users who continue to make new purchases of digital goods and services through the app.

Both fees also apply to developers who do accept Apple’s new DMA terms, adding new charges on top of the Core Technology Fee for app installs.

The changes are confusing — so much so that even Spotify isn’t yet quite sure what to make of them, according to its statement.

However, the company still condemned the revisions based on its current understanding of how this new policy would work:

“We are currently assessing Apple’s deliberately confusing proposal,” the company statement reads. “At first glance, by demanding as much as a 25% fee for basic communication with users, Apple once again blatantly disregards the fundamental requirements of the Digital Markets Act (DMA). The European Commission has made it clear that imposing recurring fees on basic elements like pricing and linking is unacceptable. We call on the Commission to expedite its investigation, implement daily fines and enforce the DMA.”

Fortnite maker Epic Games, an Apple critic that has sued both Apple and Google over their app store practices, also called out the new revisions as unlawful.

Wrote CEO Tim Sweeney in a post on X: “In the European Union where the new DMA law opens up app store competition, Apple continues its malicious compliance by imposing an illegal new 15% junk fee on users migrating to competing stores and monitor commerce on these competing stores.”

It remains to be seen whether the EU will accept Apple’s proposed changes.


Apple revises DMA compliance for App Store link-outs, applying fewer restrictions and a new fee structure

Apple has further revised its compliance plan for the European Union’s Digital Markets Act (DMA) rulebook, which, since March, has forced it to give iOS developers more freedom over how they can distribute and promote their content on its mobile platform.

The iPhone maker is already under investigation by the European Commission for suspected noncompliance with the DMA — a market contestability regulation that allows for fines of up to 10% of global annual turnover for violations (or 20% for repeat offenses).

Back in June, the EU confirmed it would directly probe the fee structure of Apple’s new DMA business terms, following complaints the company was using “junk fees” to try to circumvent the bloc’s rules. So these new amendments to Apple’s DMA plan are taking place in the context of that ongoing enforcement. Apple also claims it’s been taking feedback from developers, which it says has informed the revisions to its compliance plan.

The new changes, which Apple has made available for developers to preview Thursday ahead of a planned consumer rollout this fall, concern the option it provides developers distributing apps in the EU that allows them to communicate external offers, such as by including links in their apps to redirect users to a website for making a purchase.

One big change Apple announced Thursday is that developers who include link-outs in their apps will no longer need to accept the newer version of its business terms — which requires they commit to paying the Core Technology Fee (CTF) the EU is investigating.

In another notable revision of approach, Apple is giving developers more flexibility around how they can communicate external offers and the types of offers they can promote through their iOS apps. Apple said developers will be able to inform users about offers available anywhere, not only on their own websites — such as through other apps and app marketplaces.

How developers inform users of these offers is also being freed up, with no more Apple templates requiring certain language to be used. Apple said it will allow multiple URLs for link-outs, including links with redirects and intermediate links. However, it stipulates developers must not use URL parameters for tracking and profiling users for ad targeting.

Still, developers will be allowed to use an actionable link — that is, a link-out that can be tapped, clicked, or scanned — to easily take users to their destination.

In another change, Apple is adding an option that will enable users to opt out of the notifications it displays around link-outs, which inform users they will be transacting outside the App Store if they make a purchase through an external channel. However, Apple will continue to show these notifications by default unless users opt out.

Apple refers to the notifications as disclosure sheets, but developers critical of its approach to DMA compliance have attacked them as “scare screens,” arguing they’re intended to pressure iOS users not to leave the App Store.

New fee structure for link-outs

Alongside the aforementioned changes giving developers more freedom to use link-outs to send their iOS users to external offers, the company is revising its fee structure for link-outs via some further unbundling: Two new fees will apply to purchases completed by users of iOS apps via link-outs.

Apple is branding the first new fee an “Initial Acquisition Fee,” which it says reflects the value the App Store provides in connecting developers with customers in the EU. This fee is a 5% commission, applied under both Apple’s new business terms and its original terms.

The second fee is branded the “Store Services Fee,” which Apple says reflects the ongoing services and capabilities it provides to developers, such as app distribution and management; App Store trust and safety (including App Review); rediscovery, reengagement and promotional tools and services; and app analytics and insights.

This fee will be a 10% standard commission or a 5% discounted commission (e.g., for developers enrolled in the App Store’s small business program) under Apple’s new business terms; or 20% standard and 7% discounted under Apple’s existing terms.

Apple says the new dual fee structure replaces the reduced commission it had applied in its new EU business terms (of either 10% or 17%).

The Initial Acquisition Fee will see Apple taking a commission of 5% on sales of digital goods and services made by a new app user on any platform for the first 12 months following an initial download from its App Store of an app with a link-out entitlement.

The fee won’t apply in the case of existing iOS app users; it will only apply to new users who download an app for the first time through the App Store, per Apple.

The Store Services Fee will see Apple taking a 10% commission on sales of digital goods and services made by app users on any platform within a fixed 12-month period from the date of any install (including app updates and reinstalls) of an iOS app that has the entitlement profile to link out.

Apple says the fee will continue to be levied for existing iOS users who continue to receive installs of the app with link-out capabilities. But it says the fee will be reduced to 5% for the majority of developers enrolled in the App Store small business program or with subscriptions after their first year.

Apple suggests the revisions will mean developers on both its new and existing terms will pay lower rates for linking out to offers through the App Store — especially so in the case of existing users.
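To make the combined effect concrete, here’s a minimal sketch in Python of how the two fees might stack on a single external purchase, based solely on the rates described above. It assumes the fees simply add together (which matches the 15% figure Sweeney cites); the function name and eligibility flags are illustrative assumptions, not Apple’s billing logic.

```python
# A minimal sketch of how the two new link-out fees might combine, based on
# the rates described above. Eligibility windows and edge cases are Apple's
# to define; treat this as an illustration, not Apple's billing logic.

def linkout_commission(sale_amount: float,
                       new_user_within_12_months: bool,
                       small_business_or_year2_subscription: bool) -> float:
    """Estimate Apple's cut on an external (linked-out) sale, new business terms."""
    # Initial Acquisition Fee: 5%, only for users in their first 12 months
    # after first downloading the app from the App Store.
    acquisition_rate = 0.05 if new_user_within_12_months else 0.0

    # Store Services Fee: 10% standard, reduced to 5% for small business
    # program members or subscriptions after their first year.
    services_rate = 0.05 if small_business_or_year2_subscription else 0.10

    return sale_amount * (acquisition_rate + services_rate)

# A new user's 10-euro purchase: 5% + 10% = 15% -> 1.50 to Apple.
print(linkout_commission(10.0, new_user_within_12_months=True,
                         small_business_or_year2_subscription=False))
```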


Breaking up Google would offer a chance to remodel the web

Just for a minute, as we digest the information that Google has been found to operate an illegal monopoly, can you imagine a web without Google? An internet without Google Search, Chrome, Gmail, Maps and so on would — very obviously — be a different place. But would such a change have implications for utility — or something else? Something bigger?

Alternatives to Google’s popular freemium products exist. You can use DuckDuckGo for search, Brave to browse the web and Proton Mail for webmail, to name a few of the non-Google options for key digital tools out there. There’s even a web beta of Apple Maps these days. Or, hey, why not switch straight to the community mapping open data project OpenStreetMap? All of these services can be accessed for free, too.

What would be different in a web without Google goes well beyond mere utility.

The real issue here is about the business model underpinning service delivery. And the opportunity, if we can imagine for a minute a web that’s not dominated by Google, for different models of service delivery — ones that prioritize the interests of web users and the public infosphere — to achieve scale and thrive.

Such alternatives do already exist, as the list above shows. But on a web dominated by Google’s model of tracking-based advertising it’s extremely hard for pro-user approaches to thrive. That’s the real harm flowing from Google’s monopoly.

Google likes to paint its company “mission” as “organizing the world’s information and making it universally accessible and useful,” as its marketing puts it. But this grandiose claim has always been a fig-leaf atop a business model that makes vast amounts of money by organizing data — most especially information about people — so it can make money from microtargeted advertising.

Tracking web users’ activity feeds Google’s ability to profile the online population and profit from services related to selling highly targeted advertising. And it makes truly staggering amounts of money from this business: Alphabet, the brand Google devised almost a decade ago to pop a corporate wrapper around Google, reported full-year revenue of $307.39 billion for 2023, the vast majority of which was earned from ads.

Whether from pay-per-click ads displayed on Google search or YouTube; or through ads Google displays elsewhere on publishers’ websites; or other programmatic ad services it offers, including via its AdX exchange; or its mobile advertising platform for app developers; or through Google’s ad campaign management, marketing and analytics tools, that’s all revenue flowing to Google.

The simple truth is Google is making your information “useful” so it can feed its bottom line, because it’s in the advertising business. Put another way, its “mission” is chain-linked to a business model based on tracking and profiling web users. Organizing the world’s information doesn’t sound so benign now, does it?

Consider how Google’s incentives to structure data to mesh with its commercial priorities extend to making user-hostile changes to how it displays information. See, for example, the endless dark pattern design tricks it’s used to make it harder for users of Google Search to distinguish between organic search results and ads.

Every confused user clicking an ad thinking it’s genuine information drives Google’s revenue engine. Useful to Google, obviously, but frustrating (at best) to web users trying to find a particular piece of information (tl;dr: your time being wasted is precious to Google’s profits).

Consider, also, a more recent example: Just last month Google was accused by Italy’s competition and consumer watchdog of “misleading and aggressive” commercial practices, including providing users with “inadequate, incomplete and misleading information” (emphasis ours) about choices they should be able to exercise, thanks to a variety of EU laws, over the company’s ability to track and profile them, such as denying it the ability to link their activity across different Google-owned accounts.

Organizing this type of “information” — about the legal rights European users have to choose not to be tracked and profiled for Google’s profit — and making this info about how you can avoid being tracked “universally accessible and useful” does not appear to be a priority for Google, the adtech giant. Quite the opposite: Google stands accused of impeding users’ legal right to information that could help them protect themselves from Google’s surveillance. Oh.

Google’s market power

Google’s market power is linked to its ownership of so much information about user intention, which flows from its dominance of online search.

Its market share of search in Europe is consistently above 90%. In the U.S., Google tends to hold a slightly lower but still dominant share. And — critically — on mobile it’s been able to ensure its search engine (or, from an ads perspective, its user intention data funnel) remains the default on Apple’s rival mobile platform because it pays the iPhone maker billions for the placement every year.

A New York Times report last fall suggested Google pays Apple $18 billion a year. During the antitrust trial Google also disclosed it shares a whopping 36% — more than a third! — of search ad revenue from Safari with Apple.

This is a core grievance of the U.S. antitrust ruling finding Google operates an illegal monopoly, as we reported earlier. By paying Apple to be the default search on iOS, the judge decided Google had blocked competitors from being able to build up their own search engines to a scale that would enable them to access enough data and reach to compete with Google Search.

Such placement is important to Google because Apple’s iOS holds a dominant share of the mobile device market in the U.S. versus Google’s own Android platform (where Google typically gets to set all its own services as the default). Add to that, iOS users are generally more valuable targets for advertisers — so being able to keep accessing information about iPhone users’ intentions is strategically important to Google’s ad business.

No surprise, then, that Google is willing to fork over such a major chunk of revenue to Apple so it can keep squatting on iOS as the default search choice. But buying this spot is also about shielding its tracking-based business model.

Because Google pays Apple so much, Apple has little incentive to develop its own search engine to rival Google’s — meaning web users have missed out on the chance to try a web search product made in Cupertino. Given Apple puts such a premium on marketing privacy as a core brand value, you could at least imagine an Apple-designed search engine would do things differently and wouldn’t have to concern itself with perpetuating the mass tracking and profiling of web users as Google Search does.

It’s true Apple does have an advertising business of its own. But the device maker is not, as Google is, also the owner and operator of core adtech infrastructure that’s been used to bake tracking and profiling into the mainstream web for decades.

Add to that, if other search engines had the chance to gain more users because Google didn’t own the default iOS placement, there would be an opportunity for pro-privacy competitors, such as DuckDuckGo, to get in front of more humans and build greater momentum for alternative non-tracking-based business models.

Instead, we have a web that’s locked to tracking as the default because it’s in Google’s business interests.

Google’s ownership of Chrome gives it another key piece of infrastructure. Google’s browser holds a majority share of the market worldwide (currently around 65% per Statista). Its Chromium browser engine also underpins multiple rival browsers — such as Microsoft’s Edge browser, for example — meaning even lots of rival browsers to Google’s Chrome still use an engine that’s developed by Google. And the decisions it makes about browser infrastructure determine the business models that can fly.

In recent years, Google has been working on reformulating its adtech stack under a project it dubbed “Privacy Sandbox.” The effort is intended to shift the current adtech model that Chrome supports from cookie-based microtargeting of web users (that is, individual-level tracking and profiling) to a new form of browser-level interest-based targeting that Google claims would be less bad for privacy.

We can debate whether Privacy Sandbox would actually be a positive evolution of the tracking ads business model. The technical solution Google has devised may, technically, be less harmful to individual privacy, if it ends the mass insecure sharing of data about web users that currently takes place via real-time programmatic ad auctions. But the alternative infrastructure it’s devised is still designed to allow targeted manipulation of web users at scale, just based on organizing browser users into interest-based buckets for targeting. Regardless, one thing is crystal clear: It’s Google’s dominance that’s driving decisions about the future of web business models.

Other mainstream browsers have already blocked tracking cookies. Google hasn’t yet, not only because of its commercial interests but also because its browser is dominant. That means all sorts of other players (publishers, advertisers, smaller adtechs, etc.) are attached to the tracking data flows involved, dependent on Google’s infrastructure continuing to allow this spice through. This is why Google’s Privacy Sandbox has been closely supervised by regulators in Europe.

Principally, the U.K.’s Competition and Markets Authority (CMA) stepped in. In early 2022, it accepted a series of commitments on how Google would undertake the planned migration from tracking-cookie-based adtech to the reformulated interest-based targeting alternative, following complaints that the end of support for tracking cookies would be harmful to online publishers and advertisers reliant on the tracking ads business model.

What’s happened as a result of this close regulatory scrutiny led by a competition authority? Google’s timeline to deprecate cookies got delayed. And then, just last month, it announced it was abandoning the move — saying it was instead proposing that regulators accept an alternative whereby Chrome users would be shown some form of a choice screen. (Presumably this would let them decide whether to accept cookie-based tracking or choose Google’s interest-based alternative but Google hasn’t shared further details yet.)

Google’s self-interested approach to displaying information might be one reason not to trust the design of any such consent pop-up it devised. But the wider point here is that Google’s dominance of web infrastructure is so entrenched, the company’s model so utterly baked into the mainstream web, that even Google can’t simply make a change that might allow web users slightly more privacy. In flicking such levers, the knock-on impact on other businesses dependent on its adtech infrastructure risks being a competition harm in itself.

An alternative approach

If there’s ever a definition of a company that got too big — so big it basically owns and operates the web — then surely it’s Google.

We can dream what a web without Google would look like. But it’s not easy to imagine, given how thoroughly it’s ingrained in web infrastructure. Not so much Mountain View as the whole mountain.

Writing in the wake of the Google antitrust decision, Matt Stoller, author of the antitrust-focused newsletter Big, has a go at imagining a post-Google web in the latest edition of his publication.

“I think there’s a vision tucked in an April speech by Federal Trade Commission consumer protection chief Sam Levine on how the internet didn’t have to become the cesspool that it is today,” Stoller writes. “He sketched out what the internet could become if well-regulated, a place where we have zones of privacy, where not everything operates like a casino, and where AI works for us. This [Google antitrust] case brings us a step closer to Levine’s vision, because it means that people who want to build better safer products now have the chance to compete.”

I think you can also see glimpses of the better web that’s possible in some of the great alternative products of our age. The private messaging provided by Signal, for example. Or the strongly encrypted email, calendar, collaborative documents and other privacy-safe productivity tools being developed by Proton. Though it’s notable that both have had to be structured as nonprofit foundations in a bid to ensure they can keep providing free access to pro-user products that don’t generate revenue by data-mining their users.

In an age of monopoly power driving wall-to-wall digital surveillance, that unpleasant reality remains the rule on the mainstream web.

“I believe our digital economy can get better,” wrote Levine. “Not because our tech giants will voluntarily change their ways, or because markets will magically fix themselves. But because, at long last, there is momentum across government — state and federal, Republicans and Democrats — to push back against unchecked surveillance.”

The decision Monday by Judge Amit P. Mehta of the U.S. District Court for the District of Columbia to find Google a monopolist could be the first brick ripped out of the surveillance wall. If Google’s appeal fails and remedies are imposed, just imagine: a corporate break-up that forces the fig-leaf Alphabet to divest key Google infrastructure. Such an outcome could finally upend Google’s decades-long grip on web data flows and reboot the default model, setting this place free for users, startups and communities to reimagine and rebuild anew.


DSA vs. DMA: How Europe’s twin digital regulations are hitting Big Tech

It’s no accident that the European Union’s Digital Services Act and Digital Markets Act have such similar-sounding names: They were conceived together and, at the end of 2020, proposed in unison as a twin package of digital policy reforms. EU lawmakers had overwhelmingly approved them by mid-2022, and both regimes were fully up and running by early 2024. While each law aims to achieve distinct things, via its own set of differently applied rules, they are best understood as a joint response to Big Tech’s market power.

Key concerns driving lawmakers include a belief that major digital platforms have ignored consumer welfare in their rush to chase fatter profits online. The EU also sees dysfunctional digital markets as undermining the bloc’s competitiveness, thanks to phenomena like network effects and the power of big data to cement a winner-takes-all dynamic.

The argument is that this is both bad for competition and bad news for consumers who are vulnerable to exploitation when markets tip.

Broadly speaking, the DSA is concerned about rising risks for consumer welfare in an era of growing uptake of digital services. That could be from online distribution of illegal goods (fakes, dangerous stuff) on marketplaces or illegal content (CSAM, terrorism, etc.) on social media. But there are thornier issues, for example, with online disinformation: There may be civic risks (such as election interference), but how such content is handled (whether it’s taken down; made less visible; labeled, etc.) could have implications for fundamental rights like freedom of expression.

The bloc decided it needed an updated digital framework to tackle all these risks to ensure “a fair and open online platform environment” to underpin the next decades of online growth.

The bloc’s goal with the DSA is absolutely a balancing act, though: It is aiming to drive up content moderation standards in a quasi-hands-off way, by regulating the processes and procedures involved in content-related decisions rather than defining what can and can’t be put online. The aim is to harmonize and raise standards around governance decision-making processes, including by ensuring comms channels exist with relevant external experts, in order to make platforms more responsible in moderating content.

There’s a further twist: While the DSA’s general rules apply to all sorts of digital apps and services, the strictest requirements — concerning algorithmic risk assessment and risk mitigation — only apply to a subset of the largest platforms. So the law has been designed to have the greatest impact on popular platforms, reflecting higher risks of harm flowing from stronger market power.

But when it comes to impact on Big Tech, the DMA is the real biggie: The mission of the DSA’s sister regulation is to drive market contestability itself. The EU wants this regulation to rebalance power at the very top of the tech industry pyramid. That’s why this regime is so highly targeted, applying to just over a handful of power players.

Laws with teeth big enough to bite Big Tech?

Another important thing to note is that both laws have sizable teeth. The EU has long had a range of rules that apply to online businesses, but none of its other dedicated digital regulations carry penalties this hefty. The DSA contains penalties of up to 6% of global annual turnover for any infringements; the DMA allows for fines of up to 10% (or even 20% for repeat offenses). In some cases, that could mean billions of dollars in fines.
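To put those percentages in rough dollar terms, here’s a back-of-the-envelope sketch using Alphabet’s 2023 revenue of $307.39 billion (cited earlier in this piece) as a stand-in for global annual turnover. Actual fines would be set case by case; this is purely illustrative arithmetic.

```python
# Back-of-the-envelope maximum fines under the DSA and DMA, using Alphabet's
# reported 2023 revenue (cited earlier) as a stand-in for "global annual
# turnover". Illustrative only; real fines are set case by case.

ALPHABET_2023_REVENUE_USD = 307.39e9

for label, rate in [("DSA cap (6%)", 0.06),
                    ("DMA cap (10%)", 0.10),
                    ("DMA repeat-offense cap (20%)", 0.20)]:
    print(f"{label}: ${ALPHABET_2023_REVENUE_USD * rate / 1e9:.1f}B")
# DSA cap (6%): $18.4B
# DMA cap (10%): $30.7B
# DMA repeat-offense cap (20%): $61.5B
```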

There’s a growing list of platform giants subject to the DSA’s strictest level of oversight, including major marketplaces like Amazon, Shein and Temu; dominant mobile app stores operated by Apple and Google; social network giants, including Facebook, Instagram, LinkedIn, TikTok and X (Twitter); and, more recently, a handful of adult content sites that have also been designated as very large online platforms (VLOPs) after crossing the DSA usage threshold of 45 million or more monthly active users in the EU.

The European Commission directly oversees compliance with DSA rules for VLOPs, centralizing the rulebook’s enforcement on Big Tech inside the EU (versus enforcement of the DSA’s general rules, which is decentralized to member state-level authorities). This structure underlines that the bloc’s lawmakers are keen to prevent forum-shopping from undermining its ability to enforce these rules on Big Tech, as has happened with other major digital rulebooks (such as the GDPR).

The Commission’s early priorities for DSA enforcement fall into a few broad areas: illegal content risks; election security; child protection; and marketplace safety, though its investigations opened to date cover a wider range of issues.

Around 20 companies are in scope of the EU’s enforcement in relation to around two dozen platforms’ compliance with DSA rules for VLOPs. The Commission maintains a list of designated VLOPs and any actions it’s taken on each.

The DMA is also enforced centrally by the Commission. But this regime applies to far fewer tech giants: Just six companies were originally designated as “gatekeepers.” Back in May, European travel giant Booking was named the seventh.

The gatekeeper designation kicks in for tech giants with at least 45 million monthly EU end users and 10,000 annual business users. And, in a similar fashion to the DSA, the DMA applies rules to specific types of platforms (a similar number of platforms are in scope of each law, though the respective lists are not identical). The EU has some discretion over whether to designate particular platforms: Apple’s iMessage was let off the hook, for example, as were Microsoft’s advertising business and Edge browser; on the flip side, Apple’s iPadOS was added to the list of core platform services in April.
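As a rough illustration, the user-number side of that test can be expressed in a few lines of Python. Note this is a simplification: the full legal test also layers in financial criteria and multiyear persistence requirements, and, as the iMessage and iPadOS examples show, the Commission retains discretion either way.

```python
# A simplified sketch of the DMA's quantitative gatekeeper test, using only
# the user thresholds cited above. The full legal test also includes financial
# criteria, and the Commission keeps discretion in both directions.

def meets_user_thresholds(monthly_eu_end_users: int,
                          yearly_eu_business_users: int) -> bool:
    return (monthly_eu_end_users >= 45_000_000
            and yearly_eu_business_users >= 10_000)

print(meets_user_thresholds(50_000_000, 12_000))  # True: gatekeeper-scale usage
print(meets_user_thresholds(44_000_000, 12_000))  # False: under the end-user bar
```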

Regulated categories for the DMA cover strategic infrastructure where Big Tech platforms may be mediating other businesses’ access to consumers, including operating systems; messaging platforms; ad services; social networks; and various other types of intermediation.

The structure means there can be overlap of application between the DSA and the DMA. For example, Google Search is both a DSA VLOP (technically it’s a very large online search engine, or VLOSE, to use the correct acronym; the EU also deploys VLOPSE to refer to both) and a DMA core platform service. The respective mobile app stores of Apple and Google are likewise both VLOPs and CPS. Such platforms face a double whammy of compliance requirements, though the EU would say this reflects their strategic importance to digital markets.

Shooting for a digital market reboot

Problems the EU wants the regulations to address by reshaping behavior in digital markets include reduced consumer choice (i.e., fewer and less innovative services), and higher costs (free services may still have expensive access costs, such as forcing a lack of privacy on users).

Online business models that do not pay proper attention to consumer welfare, such as ad-funded Big Tech platforms that seek to drive engagement through outrage/polarization, are another target. And making platform power more responsible and accountable is a unifying thread running through both regimes.

The EU thinks this is necessary to drive trust in online services and power future growth. Without fair and open competition online, the EU’s thesis is that not even startups can ride to the rescue of digital markets. It’s harder for startups to reach as many consumers as the dominant players, which means there’s a low chance that innovation alone will prevent/correct negative effects. Hence the bloc’s decision to lean into regulation.

The DSA and DMA take different approaches to Big Tech

So while the DSA aims to leverage the power of transparency to drive accountability on major platforms — such as by making it obligatory for VLOPs to publish an ad archive and provide data access to independent researchers so they can study the societal impacts of their algorithmic content-sorting — the DMA tries to have a more upfront effect by laying down rules on how gatekeepers can operate strategic services that are prone to becoming choke points under a winner-takes-all playbook.

The EU likes to refer to these rules as the DMA’s list of “dos and don’ts,” which boil down to a pretty specific set of operational requirements based on stuff the bloc’s enforcers have seen before, via earlier antitrust enforcements, such as the EU’s multiple cases against Google over the past two decades. It hopes these commandments will nip any repeat bad behaviors in the bud.

One of the dos on the list, however, is an important order that aims to force CPS to open up to third parties to try to stop gatekeepers using control of their dominant platforms to close down competition.

Changes announced by Apple earlier this year to iOS in the EU, to allow sideloading of apps through web distribution and third-party app stores, are a couple of examples of the DMA forcing more openness than was on offer through Big Tech’s standard playbook.

Another key DMA interoperability mandate applies to messaging platforms. This “do” will require Meta, so far the only designated gatekeeper with messaging CPS (WhatsApp and Messenger), to build infrastructure that will allow smaller platforms to offer ways for people to communicate with, say, WhatsApp users without needing to sign up for a WhatsApp account.

This requirement is in force but has yet to translate into new opportunities for messaging app consumers and competitors, given that the DMA allows for implementation periods for undertaking the necessary technical work. The EU has also allowed Meta more time to build the technical connectors. But policymakers are hoping that over time, the interoperability mandate for messaging will lead to a leveling of the playing field in this area, because it would empower consumers to choose services based on innovation, rather than market forces.

The same competitive leveling goal applies across all CPS types the DMA regulates. The bloc’s big hope is that a set of operational commandments applied to the most powerful forces in tech will trigger a wide-ranging market reset that rekindles service innovation and supports consumer welfare. But the success or otherwise of that competitive reset mission remains to be seen.

The regulation only started applying to gatekeepers in March 2024 (versus late August 2023 for the DSA rules on VLOPSEs). The real-world effects of the flagship digital market reform will be playing out for months and years yet.

That said, if anyone thought the DMA’s fixed “dos and don’ts” would be self-executing as soon as the law began to apply, the Commission’s swift announcement (in March 2024) of a clutch of investigations for suspected noncompliance should have dispelled that notion. On certain issues, some gatekeepers are clearly digging in and preparing to fight.

List of DMA investigations opened to date

Apple: Since March, the EU has been looking into the compliance of Apple’s rules on steering developers in the App Store; the design of choice screens for alternatives to its Safari web browser; and whether its core technology fee (CTF) — a new charge introduced with the set of business terms that implement DMA entitlements — meets the bloc’s rules. The law doesn’t include a specific ban on gatekeepers charging fees, but they must abide by FRAND (fair, reasonable and nondiscriminatory) terms.

In June 2024, the Commission announced preliminary findings on the first two Apple probes and confirmed the formal CTF investigation. Its draft findings at that point included that Apple is breaching the DMA by not letting developers freely inform their users of alternative purchase opportunities. All these probes remain ongoing.

Alphabet/Google: The EU has also been investigating Alphabet’s rules on steering in Google Play, as well as self-preferencing in search results since March.

Meta: Meta’s “pay or consent” model also went under DMA investigation in March. Since November 2023, the tech giant has forced EU users of Facebook and Instagram to agree to being tracked and profiled for ad targeting in order to get free access to its social networks; otherwise, they would have to pay a monthly subscription to use the services. On July 1, the EU issued a preliminary finding that this binary choice Meta imposes breaches the DMA. The investigation is ongoing.

DSA: EU investigations on VLOPSEs

On the DSA side, the Commission has been slower to open formal investigations, although it does now have multiple probes open.

By far its most used enforcement action is a power to ask platforms for more information about how they’re operating regulated services (known as a request for information, or RFI). This underpins the EU’s ability to monitor and assess compliance and build cases where it identifies grievances, explaining why the tool has been used repeatedly over the past 11 months since the compliance deadline for VLOPSEs.

X (Twitter): The first DSA investigation the EU opened was on X, back in December 2023. The formal proceeding concerned a raft of issues including suspected breaches of rules related to risk management; content moderation; dark patterns; advertising transparency; and data access for researchers. In July 2024 the Commission issued its first DSA preliminary findings, which concern aspects of its investigation of X.

One of the preliminary findings is that the design of the blue check on X is an illegal dark pattern under the DSA. A second preliminary finding is that X’s ad repository does not comply with the regulatory standard. A third preliminary finding is that X has failed to provide the requisite data access for researchers. X was given a chance to respond.

Other areas where the EU continues to investigate X relate to the spread of illegal content; its handling of disinformation; and its Community Notes content moderation feature. On these, the Commission has yet to reach a preliminary view.

TikTok: In February 2024 the EU announced a DSA probe of video social network TikTok it said is focused on protection of minors; advertising transparency; data access for researchers; and the risk management of addictive design and harmful content.

AliExpress: In March 2024 the Commission opened its first DSA probe of an ecommerce marketplace, targeting AliExpress over suspected failings in risk management and mitigation; content moderation; its internal complaint-handling mechanisms; the transparency of advertising and recommender systems; the traceability of traders; and data access for researchers.

Meta: In April 2024 the EU took aim at Meta’s social networks Facebook and Instagram, opening a formal DSA investigation for suspected breaches related to election integrity rules. Specifically it said it’s concerned about the tech giant’s moderation of political ads. It’s also concerned about Meta’s policies for moderating non-paid political content, suggesting they are opaque and overly restrictive.

The EU also said it would look into policies related to enabling outsiders to monitor elections. A further grievance it’s probing relates to Meta’s processes for letting users flag illegal content, which EU enforcers are concerned are not easy enough to use.

Penalties and impacts

So far, no DSA or DMA investigations have been formally concluded by the Commission, meaning that no penalties have been issued yet. But that’s likely to change as probes conclude in the coming months and years.

As with all EU regulations, it’s worth emphasizing that enforcement is a spectrum, not an event. Just the fact of oversight can apply pressure and lead to operational changes, ahead of any formal finding of noncompliance. Assessing impact based on headline penalties and sanctions alone would be a very crude way of trying to understand a regulation’s effect.

Notably, there have already been some big changes to how major platforms are operating in the EU — such as Apple being forced to allow sideloading or open up its Safari browser, or Google having to ask users to link data for ad targeting across CPS, to name a few early DMA-related developments.

But it’s also true that some major business model reforms have yet to happen.

Notably, Apple has so far stuck to its fee-based model for the App Store (creating a new fee, the CTF, in a bid to work around the effect of being forced to open up its App Store); and Meta has sought to cling to a privacy-hostile model by forcing users to choose between being tracked or paying to use its historically free services, despite blowback from enforcers of multiple EU rules.

On the DSA side, the EU has been quick to trumpet a number of developments as early wins, such as crediting the DSA with helping drive improvements in platforms’ responsiveness to election security concerns ahead of the EU elections (also following its publication of detailed guidance and pre-election stress-testing exercises), or highlighting LinkedIn’s decision to disable certain types of ads data linking following a DSA complaint. Another example the EU points to in order to illustrate early impact is TikTok pulling functionality from the TikTok Lite app in the region over addiction concerns.

DMA effects the Commission may be less keen to own are claims by Apple and Meta that they’re delaying the launch of certain AI features in the EU, as they’re unsure how the DMA applies.


How the theft of 40M UK voter register records was entirely preventable

A cyberattack on the U.K. Electoral Commission that resulted in the data breach of voter register records on 40 million people was entirely preventable had the organization used basic security measures, according to the findings from a damning report by the U.K.’s data protection watchdog published this week.

The report, published by the U.K.’s Information Commissioner’s Office on Monday, blamed the Electoral Commission, which maintains copies of the U.K. register of citizens eligible to vote in elections, for a series of security failings that led to the mass theft of voter information beginning in August 2021.

The Electoral Commission did not discover the compromise of its systems until more than a year later in October 2022 and took until August 2023 to publicly disclose the year-long data breach.

The Commission said at the time of public disclosure that the hackers broke into servers containing its email and stole, among other things, copies of the U.K. electoral registers. Those registers store information on voters who registered between 2014 and 2022, and include names, postal addresses, phone numbers and nonpublic voter information.

The U.K. government later attributed the intrusion to China, with senior officials warning that the stolen data could be used for “large-scale espionage and transnational repression of perceived dissidents and critics in the U.K.” China denied involvement in the breach.

The ICO issued its formal rebuke of the Electoral Commission on Monday for violating U.K. data protection laws. “If the Electoral Commission had taken basic steps to protect its systems, such as effective security patching and password management, it is highly likely that this data breach would not have happened,” ICO deputy commissioner Stephen Bonner said in a statement on the report and reprimand.

For its part, the Electoral Commission conceded in a brief statement following the report’s publication that “sufficient protections were not in place to prevent the cyber-attack on the Commission.” 

Until the ICO’s report, it wasn’t clear exactly what led to the compromise of tens of millions of U.K. voters’ information — or what could have been done differently.

Now we know that the ICO specifically blamed the Commission for not patching “known software vulnerabilities” in its email server, which was the initial point of intrusion for the hackers who made off with reams of voter data. The report also confirms a detail reported by TechCrunch in 2023: The Commission ran its email on a self-hosted Microsoft Exchange server.

In its report, the ICO confirmed that at least two groups of malicious hackers broke into the Commission’s self-hosted Exchange server during 2021 and 2022 using a chain of three vulnerabilities collectively referred to as ProxyShell, which allowed the hackers to break in, take control, and plant malicious code on the server. 

Microsoft released patches for ProxyShell several months earlier in April and May 2021, but the Commission had not installed them.

By August 2021, U.S. cybersecurity agency CISA began sounding the alarm that malicious hackers were actively exploiting ProxyShell, at which point any organization with an effective security patching process in place had already rolled out the fixes months earlier and was protected. The Electoral Commission was not one of those organizations.

“The Electoral Commission did not have an appropriate patching regime in place at the time of the incident,” read the ICO’s report. “This failing is a basic measure.”

Among the other notable security issues discovered during the ICO’s investigation: The Electoral Commission allowed passwords that were “highly susceptible” to being guessed, and the Commission confirmed it was “aware” that parts of its infrastructure were out of date.


Why didn’t the ICO fine the Electoral Commission?

An entirely preventable cyberattack that exposed the personal data of 40 million U.K. voters might sound like a serious enough breach for the Electoral Commission to be penalized with a fine, not just a reprimand. Yet, the ICO has only issued a public dressing-down for the sloppy security. 

Public sector bodies have faced penalties for breaking data protection rules in the past. But in June 2022, under the prior Conservative government, the ICO announced it would trial a revised approach to enforcement on public bodies.

The regulator said the policy change meant public authorities would be unlikely to see large fines imposed for breaches for the next two years, even as the ICO suggested incidents would still be thoroughly investigated. But the sector was told to expect increased use of reprimands and other enforcement powers, rather than fines. 

In an open letter explaining the move at the time, information commissioner John Edwards wrote: “I am not convinced large fines on their own are as effective a deterrent within the public sector. They do not impact shareholders or individual directors in the same way as they do in the private sector but come directly from the budget for the provision of services. The impact of a public sector fine is also often visited upon the victims of the breach, in the form of reduced budgets for vital services, not the perpetrators. In effect, people affected by a breach get punished twice.”

At a glance, it might look like the Electoral Commission had the good fortune to discover its breach within the ICO’s two-year trial of a softer approach to sectoral enforcement.

In concert with the ICO saying it would test fewer sanctions for public sector data breaches, Edwards said the regulator would adopt a more proactive workflow of outreach to senior leaders at public authorities to try to raise standards and drive data protection compliance across government bodies through a harm-prevention approach.

However, when Edwards revealed the plan to test combining softer enforcement with proactive outreach, he conceded it would require effort at both ends, writing: “[W]e cannot do this on our own. There must be accountability to deliver these improvements on all sides.”

The Electoral Commission breach might therefore raise wider questions over the success of the ICO’s trial, including whether public sector authorities have held up their side of a bargain that was supposed to justify the softer enforcement. 

Certainly it does not appear that the Electoral Commission was adequately proactive in assessing breach risks in the early months of the ICO trial (that is, before it discovered the intrusion in October 2022). The ICO’s reprimand calling the Commission’s failure to patch known software flaws a lapse in a “basic measure,” for example, sounds like the definition of the avoidable data breach the regulator had said it wanted its public sector policy shift to purge.

However, the ICO claims it did not apply the softer public sector enforcement policy in this case.

Responding to questions about why it didn’t impose a penalty on the Electoral Commission, ICO spokeswoman Lucy Milburn told TechCrunch: “Following a thorough investigation, a fine was not considered for this case. Despite the number of people impacted, the personal data involved was limited to primarily names and addresses contained in the Electoral Register. Our investigation did not find any evidence that personal data was misused, or that any direct harm has been caused by this breach.”

“The Electoral Commission has now taken the necessary steps we would expect to improve its security in the aftermath, including implementing a plan to modernise their infrastructure, as well as password policy controls and multi-factor authentication for all users,” the spokesperson added. 

As the regulator tells it, no fine was issued because no data was misused, or rather, the ICO didn’t find any evidence of misuse. Merely exposing the information of 40 million voters did not meet the ICO’s bar. 

One might wonder how much of the regulator’s investigation was focused on figuring out whether voter information might have been misused.

Returning to the ICO’s public sector enforcement trial: In late June, as the experiment approached the two-year mark, the regulator issued a statement saying it would review the policy before making a decision on the future of its sectoral approach in the fall.

Whether the policy sticks or there’s a shift to fewer reprimands and more fines for public sector data breaches remains to be seen. Regardless, the Electoral Commission breach case shows the ICO is reluctant to sanction the public sector — unless exposing people’s data can be linked to demonstrable harm. 

It’s not clear how a regulatory approach that’s lax on deterrence by design will help drive up data protection standards across government.


FTC and Justice Department sue TikTok over alleged child privacy violations

The U.S. Federal Trade Commission and the Justice Department are suing TikTok and ByteDance, TikTok’s parent company, for violating the Children’s Online Privacy Protection Act (COPPA). The law requires digital platforms to notify parents and obtain their consent before collecting and using personal data from children under the age of 13.

In a press release issued Friday, the FTC’s Bureau of Consumer Protection said that TikTok and ByteDance were “allegedly aware” of the need to comply with COPPA, yet spent “years” knowingly allowing millions of children under 13 on their platform. TikTok did so, the FTC alleges, even after settling with the FTC in 2019 over COPPA violations; as a part of that settlement, TikTok agreed to pay $5.7 million and implement steps to prevent kids under 13 from signing up.

“As of 2020, TikTok had a policy of maintaining accounts of children that it knew were under 13 unless the child made an explicit admission of age and other rigid conditions were met,” the FTC wrote in the press release. “TikTok human reviewers allegedly spent an average of only five to seven seconds reviewing each account to make their determination of whether the account belonged to a child.”

TikTok and ByteDance maintained and used underage users’ data, including data for ads targeting, even after employees raised concerns and TikTok reportedly changed its policy not to require an explicit admission of age, according to the FTC. More damningly, TikTok continued to allow users to sign up with third-party accounts, like Google and Instagram, without verifying that they were over 13, the FTC adds.

The FTC also took issue with Kids Mode, TikTok’s supposedly more COPPA-compliant mobile experience. Kids Mode collected “far more data” than needed, the FTC alleges, including info about users’ in-app activities and identifiers that TikTok used to build profiles (and shared with third parties) to try to prevent attrition.

When parents requested that their child’s accounts be deleted, TikTok made it difficult, the FTC said, and often failed to comply with those requests.

“TikTok knowingly and repeatedly violated kids’ privacy, threatening the safety of millions of children across the country,” FTC chair Lina Khan said in a statement. “The FTC will continue to use the full scope of its authorities to protect children online — especially as firms deploy increasingly sophisticated digital tools to surveil kids and profit from their data.”

TikTok had this to share with TechCrunch via email: “We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed. We are proud of our efforts to protect children, and we will continue to update and improve the platform. To that end, we offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screen time limits, Family Pairing, and additional privacy protections for minors.”

The FTC and Justice Department are seeking civil penalties of up to $51,744 per violation, per day, against TikTok and ByteDance, as well as a permanent injunction to prevent future COPPA violations.
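To give a sense of how quickly that per-violation, per-day rate compounds, here’s a hedged illustration in Python. The violation and day counts below are hypothetical placeholders for illustration only, not figures from the complaint.

```python
# Hedged illustration of how COPPA civil penalties could scale at the rate
# cited above ($51,744 per violation, per day). The violation and day counts
# are hypothetical placeholders, not figures from the complaint.

MAX_PENALTY_PER_VIOLATION_PER_DAY = 51_744  # statutory cap cited by the FTC

hypothetical_violations = 1_000  # placeholder count, for illustration
hypothetical_days = 30           # placeholder duration, for illustration

exposure = (MAX_PENALTY_PER_VIOLATION_PER_DAY
            * hypothetical_violations * hypothetical_days)
print(f"${exposure:,}")  # $1,552,320,000 at these placeholder counts
```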
