Data protection law already provides the tools to tackle intimate image abuse: it is time for those in power to act, says Jon Belcher
- AI-generated deepfakes of real people amount to personal data processing, bringing them squarely within the scope of UK data protection law.
- The Information Commissioner’s Office already has the powers to act against platforms enabling non-consensual image abuse—the issue is enforcement, not a lack of legislation.
Recent controversy over deepfake images generated by artificial intelligence (AI) has created the perception that individuals lack legal protection, and has prompted a wider debate about how to regulate the ever-growing AI industry.
In fact, relevant laws are already in place, but they have not yet proved effective in protecting individuals. The UK’s data protection legislation could be used to tackle the issue, if the regulator and the courts were prepared to take decisive action.
Grok & the deepfake factories
Since purchasing the social media platform Twitter in 2022, which he later



