Japan struggles with gray zone of AI deepfakes exploiting children

4/27 Japan Times

Disturbingly realistic sexual images of children generated by artificial intelligence are spreading worldwide across social media and online forums — often based on real photos scraped from the internet or school yearbooks.

In Japan, these images fall into a legal gray zone that leaves those who have had their photos used as training data for such AI with no clear path to justice. That gray zone is becoming increasingly dangerous, experts warn, as AI tools are making it easier for anyone to create and share hyper-realistic images with just a few clicks.

“The current law was designed to protect real children, but generative AI has blurred the line between real and fake,” said Takashi Nagase, a lawyer and professor at Kanazawa University who helped draft internet policy at the Internal Affairs and Communications Ministry.

Under Japan’s law on child pornography, which took effect in 1999, the possession and distribution of child sexual abuse material (CSAM) is illegal, but the law applies only to depictions of real, identifiable children.

AI-generated content, including so-called "deepfakes" made using pictures of real people, is not explicitly covered under the current framework, nor are human-drawn illustrations that depict child abuse.

As a result, fictional images created using generative AI trained on photos of children often fall outside the law’s reach, unless the child depicted can be clearly identified.

This ambiguity is raising alarms among child protection advocates, as policymakers struggle to decide where to draw the line.

Fighting to stop it

One local government has taken matters into its own hands.

On April 1, a revised ordinance took effect in Tottori Prefecture explicitly banning people from creating or distributing AI-generated CSAM — even if it was created outside the prefecture — using photos of children living in the prefecture.

“We’ve established (with the ordinance) that AI-generated deepfake pornography is not something that should be allowed,” said Tottori Gov. Shinji Hirai in a news conference on April 3, calling for the central government to draft a similar law.

The ordinance does not specify penalties for violators, a matter Hirai said would be left for future discussion; for now, its aim is to raise awareness of the issue, he added.

Without a national law, enforcement remains patchy and potentially limited by jurisdiction. Images kept on servers overseas or shared anonymously can be difficult to trace or remove, even when local ordinances apply.

Nonprofits are also increasing the pressure. ChildFund Japan, which has long campaigned for stronger child protection policies, began focusing on AI-generated abuse imagery following the global #MeToo movement and growing public support for modernizing Japan’s approach to CSAM.

In 2023, the group raised the issue in parliament, and has since hosted symposiums, launched a working group, and held discussions with lawmakers and tech platforms.

In a survey it released in March, 87.9% of the 1,200 respondents in Japan between the ages of 15 and 79 said they want stricter legislation banning AI-generated CSAM.

“There’s growing concern that generative AI isn’t being adequately addressed in Japanese media or law,” said Katsuhiko Takeda, executive director of ChildFund Japan. “The law as it stands was not made from a child’s perspective. That has to change.”

One possible route currently open to victims is to file a defamation lawsuit. However, this puts the burden on the child and their guardians to notice the misuse and file a complaint, which Takeda called “a completely unrealistic expectation.” The deeper issue, he said, is one of awareness among both lawmakers and the public, and he advocates comprehensive legislation that also bans AI-generated images made from real photos.

Asked during a Lower House Cabinet Committee meeting on April 9 whether existing legislation is sufficient to prosecute those who create or share such images, Masahiro Komura, state minister for justice, said AI-generated CSAM can be restricted under certain conditions.

Komura said if an image “shows the posture or appearance of a real child in a way that can be visually perceived,” it may qualify as CSAM — especially if the source material is identifiable.

Empowering children

Chief Cabinet Secretary Yoshimasa Hayashi said in the same parliamentary session that a cross-ministerial task force and a government expert panel are working to address legal and ethical questions surrounding generative AI and its misuse.

Other countries, meanwhile, have already moved ahead.

In February, the U.K. announced a new bill that will make it illegal to possess, create or distribute AI tools designed to create CSAM, with a punishment of up to five years in prison.

In the U.S., AI-generated CSAM is illegal under federal law, regardless of whether the victim exists.

Experts say Japan could benefit from studying these models — but legal reform alone isn’t enough. As AI tools become more accessible, there’s a growing consensus that education must play a central role in protecting children.

That includes teaching young people about the risks of sharing personal photos online, and integrating AI and media literacy into school curricula, which would empower them to protect themselves from evolving threats that the law has yet to catch up with.

“The generated image might be fictional — but the harm to real victims is not,” said Takeda. “That’s the line Japan needs to draw, and it needs to be drawn now.”
