Exposing the illegal trade in images of child sex abuse

Image: A young child's silhouette against a vibrant background

The BBC has discovered that paedophiles are using artificial intelligence (AI) technology to produce and sell lifelike child sexual abuse material.

Some people are getting access to the images by subscribing to accounts on popular content-sharing websites like Patreon.

According to Patreon, there is "zero tolerance" for such imagery on its website.

The National Police Chiefs' Council deemed it "outrageous" that some platforms were making "huge profits" while abdicating their moral obligations.

The creators of the abuse images are employing artificial intelligence (AI) software called Stable Diffusion, which was created to produce images for use in graphic design or art.

AI enables computers to carry out tasks that would normally require human intelligence.

The Stable Diffusion software enables users to describe any image they desire using word prompts, and the program will then produce the desired image.

However, the BBC has discovered that it is being used to produce realistic images of child sexual abuse, including the rape of infants and young children.

Teams from the UK police investigating online child abuse claim to have already run into such material.

Journalist Octavia Sheepshanks says there has been a "huge flood" of AI-generated images.

Octavia Sheepshanks, a freelance journalist and researcher, has been investigating this issue for some time. She alerted the BBC to her findings via the children's charity the NSPCC.

Since AI-generated images became possible, she said, "there has been this huge flood... It's not just very young girls, [paedophiles] are talking about toddlers."

In the UK, a computer-generated "pseudo image" depicting child sexual abuse is treated the same as a real image, and it is unlawful to possess, publish, or transfer one.

According to Ian Critchley, the National Police Chiefs' Council's (NPCC) lead on child safeguarding, it would be incorrect to claim that because no actual children were shown in such "synthetic" images, no one was harmed.

A paedophile could "move along that scale of offending from thought, to synthetic, to actually the abuse of a live child," he cautioned.

There are three steps involved in the sharing of abuse images.

  • Paedophiles create the images using AI software.
  • They advertise the images on platforms such as the Japanese picture-sharing site Pixiv.
  • Links on those accounts direct customers to more explicit images, which they can pay to view on sites such as Patreon.

Some of the image makers are publishing on Pixiv, a well-liked Japanese social media site used primarily by creators of manga and anime.

Because the site is hosted in Japan, where sharing sexualised cartoons and drawings of children is legal, the image creators use it to promote their work in groups and via hashtags, which index topics using keywords.

A Pixiv spokesman said the company was intensely focused on addressing this issue. On 31 May, it said it had banned all photo-realistic depictions of sexual content involving minors.

The business claimed that it had proactively strengthened its monitoring systems and was devoting significant resources to addressing issues brought on by advancements in AI.

Ms. Sheepshanks told the BBC that her research suggested users appeared to be producing child abuse images on a large scale.

Because of the sheer volume, she explained, "people [creators] will say, 'We aim to do at least 1,000 images a month.'"

Users' comments on specific Pixiv images make it clear that they have a sexual interest in children; some even offer to provide real-world images and videos of abuse.

Some of the groups on the platform have been under the scrutiny of Ms. Sheepshanks.

"Within those groups, which will have 100 members, people will share, 'Oh here's a link to real stuff,'" she says.

"The worst things, I didn't even know such words [descriptions] existed."

Many Pixiv accounts have links in their bios directing viewers to what they refer to as their "uncensored content" on the US-based content-sharing platform Patreon.

Patreon, estimated to be worth $4bn (£3.1bn), claims to have more than 250,000 creators, most of them legitimate accounts belonging to well-known writers, journalists, and celebrities.

Fans can support creators for as little as $3.85 (£3) a month by subscribing to their blogs, podcasts, videos, and images.

But during our investigation, we discovered Patreon accounts selling photo-realistic, AI-generated, obscene images of children, with price points varying according to the type of content requested.

"I train my girls on my PC," one wrote on his account, adding that they showed "submission". Another user offered "exclusive uncensored art" for $8.30 (£6.50) a month.

Patreon confirmed that one example the BBC sent was "semi-realistic and violates our policies", and said the account was immediately removed.

Patreon said it had a "zero-tolerance" policy, insisting: "Creators cannot fund content dedicated to sexual themes involving minors."

The company added that it had "identified and removed increasing amounts" of this material, calling the rise of harmful AI-generated content on the internet "real and distressing".

The company said it was "very proactive", with dedicated teams, technology, and partnerships to "keep teens safe", and that it already bans AI-generated synthetic child exploitation material.

Ian Critchley of the NPCC called it a "pivotal moment" for society.

A global collaboration between academics and several companies, led by the UK company Stability AI, resulted in the development of the AI image generator Stable Diffusion.

There have been several releases, with restrictions built into the code to limit the types of content that can be created.

However, an earlier "open source" version, released to the public last year, allowed users to disable any filters and train the program to produce any image, including illegal ones.

Stability AI said: "Our policies are clear that this includes CSAM (child sexual abuse material), and we prohibit any misuse for illegal or immoral purposes across our platforms."

"We firmly support law enforcement efforts against those who misuse our products for unlawful purposes," the statement continued.

Questions have been raised about the potential risks AI may one day pose to people's safety, privacy, or human rights as it continues to develop quickly.

Ian Critchley of the NPCC expressed concern that the influx of realistic AI or "synthetic" images might make it more difficult to find actual abuse victims.

He explains: "It creates additional demand, in terms of policing and law enforcement, to identify where an actual child, wherever it is in the world, is being abused, as opposed to an artificial or synthetic child."

Mr. Critchley said he believed society had reached a turning point.

"We can make sure that young people can take advantage of the amazing opportunities that the internet and technology provide, or it could become a much more dangerous place," he warned.
