On February 23, 2026, over sixty data protection regulators across multiple jurisdictions, with the support of the European Data Protection Board (EDPB), issued a Joint Statement on AI-Generated Imagery and the Protection of Privacy. The document reflects growing institutional concern over the dangers posed by artificial intelligence tools capable of producing realistic depictions of real individuals without their knowledge or authorization — with particular attention to the heightened vulnerability of children, teenagers, and other at-risk populations, including in scenarios involving online harassment and exploitation.
The Statement serves as a reminder to all entities engaged in the development or deployment of generative AI that such activities must remain fully consistent with existing legal frameworks, including those governing data protection and personal privacy. It also draws explicit attention to the fact that the production of non-consensual intimate imagery may give rise to criminal liability across a number of legal systems.
In addition, the document articulates several guiding principles for the responsible handling of these technologies: (i) the implementation of meaningful safeguards against the misuse of personal data and the generation of harmful or non-consensual intimate content, with heightened protections where minors are concerned; (ii) a commitment to transparency with respect to what AI systems can and cannot do, and the boundaries of their authorized use; (iii) the establishment of accessible and effective channels through which individuals may seek the removal of injurious content; and (iv) a targeted approach to child-specific risks, encompassing reinforced protective measures and the provision of clear, age-suited information.
The Statement closes by reaffirming the signatory authorities’ shared commitment to ongoing information exchange and coordinated international action in response to the systemic risks generated by the proliferation of generative AI technologies. Underlying the entire document is the principle that progress in technology must never be pursued at the cost of privacy, personal safety, human dignity, or fundamental rights.
The big picture
The timing of the Statement is far from coincidental. It emerged at a moment of acute public concern following the mass rollout of Grok, the artificial intelligence system developed by xAI, to millions of users across multiple platforms — a deployment that proceeded, by most accounts, without adequate content filtering mechanisms in place.
The consequences were swift and deeply troubling: the system generated thousands of non-consensual intimate images, including, in documented cases, depictions involving minors. The episode laid bare the real-world dangers of releasing powerful generative AI tools into the public domain without robust safeguards, transforming what had previously been a theoretical regulatory concern into a concrete and large-scale harm. Grok's output prompted investigations in the UK, California, Brazil, and Europe. It is against this backdrop that the coordinated response of more than sixty data protection authorities must be understood — not as an abstract policy exercise, but as a direct institutional reaction to a crisis already unfolding.
