Deepfakes and New Zealand's Law: An Evaluation of the Deepfake Digital Harm and Exploitation Bill

BY ESTHER WONG

***Trigger Warning: This article contains discussions of image-based sexual abuse.

Introduction

Who would have imagined that fictional depictions of robots taking over the world would become our reality? Artificial intelligence (AI) has seamlessly consumed our digital lives, and with it has come the troubling rise of deepfakes. Although the technology can be used for creative purposes, it can also be weaponised to digitally misappropriate and harm a person’s likeness, for example, by generating synthetic pornographic material. This exact scenario prompted ACT MP Laura McClure to propose the Deepfake Digital Harm and Exploitation Bill in the hope of updating New Zealand’s outdated legislation and addressing the rapid growth of AI.

What are deepfakes?

Deepfakes are hyper-realistic digital manipulations of a person’s face, body or voice in a video, image or audio recording. One of the most concerning uses of this technology is the non-consensual overlaying of real people’s likenesses onto pornographic material. DeepMedia has estimated that 8 million deepfakes will be shared on social media in 2025, with the number of deepfakes doubling every six months. Furthermore, the European Commission estimates that pornographic material comprises 98% of all deepfake content.

What is the Deepfake Digital Harm and Exploitation Bill?

The Deepfake Digital Harm and Exploitation Bill (the Bill) was introduced in May 2025 by ACT MP Laura McClure to restrict the generation and sharing of unauthorised sexually explicit deepfakes, as well as threats to share them. It addresses the clear infringement of personal autonomy and self-ownership caused by the production and distribution of such content.

The harm caused by deepfakes has extended from public figures to ordinary people, including young people in New Zealand. McClure emphasised that educators are already witnessing real harm, and that urgent legal regulation is required to prevent the normalisation of such abuse.

Our current Crimes Act 1961 and Harmful Digital Communications Act 2015, which address revenge porn and intimate recordings, fail to account for the sharing of non-consensual deepfake pornography. As a result, to safeguard against detrimental psychological and reputational harm, the Bill seeks to criminalise such behaviour and to provide clear pathways for victim redress and the removal of damaging content.

What’s the difference between the Bill and our current legislation?

The Crimes Act 1961

The Bill amends the Crimes Act 1961 to expand the definition of an intimate visual recording. Sections 216G to 216N of the Crimes Act 1961 regulate the non-consensual making, possession and distribution of intimate visual recordings. An intimate visual recording is defined as a recording (a photograph, videotape, or digital image) made without the consent or knowledge of the person depicted while they are naked, engaged in sexual activity, or carrying out personal bodily activities such as showering or toileting. These provisions were drafted for real recordings of real individuals made without consent; the definition therefore does not capture deepfake pornography, despite its depiction of a real person in an intimate or sexual context.

The Deepfake Digital Harm and Exploitation Bill addresses this gap by expanding the definition of intimate visual recording in section 216G to include digitally fabricated images or videos. The Bill adds a new subsection (1A), which clarifies that an intimate visual recording encompasses recordings that have been created, synthesised, or altered without the individual’s authorisation. In section 216N, the Bill also inserts a definition of “subject”, meaning an “individual who is, or appears to be, featured or depicted in the recording”. This amendment gives victims of deepfakes the same legal protection as those whose genuine material has been shared without consent, and imposes criminal liability on those who digitally create or distribute deepfakes.

The Harmful Digital Communications Act 2015

The Harmful Digital Communications Act 2015 was enacted to prevent and redress digital harm such as harassment, bullying and the sharing of intimate images. Section 22A makes it an offence to post a digital communication that is an intimate visual recording of a victim. However, the definition of “intimate visual recording” is tied to the Crimes Act’s existing meaning, which is restricted to real recordings. As a result, the Harmful Digital Communications Act does not protect victims whose intimate images have been digitally manipulated or generated. This means victims of deepfakes cannot pursue criminal charges under this Act unless they can prove the material was genuine and caused “serious emotional distress”.

The Deepfake Digital Harm and Exploitation Bill closes this loophole by amending section 4 of the Harmful Digital Communications Act to align with the new definition proposed for the Crimes Act. The Bill incorporates a clause that extends intimate visual recordings to “images that are created, synthesised, or altered to appear to be intimate visual images”. This ensures consistent terminology across both Acts and means that the offence under section 22A applies to generated images, allowing victims to seek court-ordered takedowns, suppression orders and prosecutions.

Is this enough?

Experts say the government needs to do more to mitigate the risks associated with AI. An open letter signed by more than 20 AI specialists calls on the government to improve the regulations governing the technology. According to research by Dr Andrew Lensen, a senior lecturer in AI at Victoria University of Wellington, approximately 81% of New Zealanders want AI to be regulated, and 66% are fearful of it. He says the Bill is a good initial step, but urges the government to take further action, such as establishing a national AI oversight body. He believes that we should not ban AI, but rather engage in a broader conversation about how to regulate it.

Where to seek help?

If a sexually explicit image of you has been shared without your consent, you can use StopNCII.org.

If you are under 18 and your intimate image has been shared, you can use a free tool called Take It Down to help delete and prevent the sharing of your image.

Conclusion

The Bill is only the beginning of bridging the gap between law and technology. Extending the definition of intimate visual recording in both the Crimes Act 1961 and the Harmful Digital Communications Act 2015 acknowledges the risks of deepfakes and recognises that digitally fabricated intimate images cause real harm. However, New Zealand must continue to develop stronger safeguards to keep pace with the rapid growth of AI.


The views expressed in the posts and comments of this blog do not necessarily reflect those of the Equal Justice Project. They should be understood as the personal opinions of the author. No information on this blog will be understood as official. The Equal Justice Project makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. The Equal Justice Project will not be liable for any errors or omissions in this information nor for the availability of this information.