AI vs. Privacy: German Court Rules Meta’s Legitimate Interest Prevails

Published on 27 Jun 2025
The case had sought to prevent Meta from using European citizens’ personal data to train its AI models.

In a recent decision, the Higher Regional Court of Cologne (Oberlandesgericht Köln, OLG) rejected a request by Verbraucherzentrale NRW (NRW), a non-profit organization focused on consumer rights. The organization had sought to prevent Meta Platforms Ireland Limited (Meta) from using European citizens’ personal data to train its AI models.

The case arose after Meta announced its intention to begin using personal data from users’ public profiles as part of its AI model training. NRW filed for an interim injunction, arguing that such data processing violated the General Data Protection Regulation (GDPR).

However, after reviewing the case, the OLG concluded that Meta’s planned data use does not violate the GDPR or the Digital Markets Act (DMA). Instead, the OLG found the processing to be lawful under Article 6(1)(f) of the GDPR, which allows data processing based on a “legitimate interest” – even without explicit consent from data subjects. The court accepted Meta’s claim that AI training constitutes such a legitimate interest and determined that no less intrusive alternative would be equally effective.

Given the amount of data required for AI training, the court noted that anonymizing all the information would be practically impossible. By weighing the rights of data subjects against Meta’s interests, the court found that Meta’s interest in processing the data prevails.

The OLG based its decision in part on a 2024 opinion issued by the European Data Protection Board, which Meta referenced in support of its action. According to the ruling, only publicly accessible data – such as that indexed by search engines – will be used. Although the data includes information from third parties, minors, and potentially sensitive data under Article 9 of the GDPR, the court held that these factors do not outweigh the mitigating measures Meta has implemented.

Meta had already disclosed its plans in 2024, informing users through its apps and other available channels. The OLG emphasized that users have the option to prevent their data from being used by adjusting their privacy settings to make data non-public or by submitting an objection. The data Meta uses does not include directly identifying information such as names, email addresses, or postal addresses.

Additionally, the court found no violation of Article 5(2) of the DMA. In its preliminary legal assessment, the OLG stated that Meta is not merging data from various services or sources into a unified user profile as part of the planned procedure. It also noted that there is currently no relevant case law on this issue.

Key takeaways

  • Legitimate interest can justify AI training without consent, provided that there is a clearly defined legitimate interest and no less intrusive alternative.
  • Transparency and opt-out mechanisms can help support legal defensibility.
  • Even when large datasets include, for example, sensitive data or data from minors, processing may be permitted if risk-mitigation measures are in place and the processing purpose is considered valuable.

The decision may come as a surprise to some, considering the sensitivity of the data and the growing public concern surrounding AI and privacy. However, it marks a clear win for Meta – and potentially for other tech companies looking to develop and train AI systems using similar methods. What remains to be seen is how the long-term balance between technological innovation and the protection of personal privacy will evolve, and what consequences this ruling may have for future interpretations of data protection law in the context of AI.
