EU set to criminalize AI-generated child sexual abuse and fake content



The European Union is taking steps to criminalize the use of artificial intelligence (AI) to generate child sexual abuse images and fake content. The European Commission announced that AI-generated images and other forms of deepfakes depicting child sexual abuse (CSA) may be criminalized in the EU, and there are plans to update existing legislation to keep pace with technological developments. The European bloc is poised to make the production of such content a criminal offence.


AI-Generated Child Sexual Abuse Images

The EU holds that children need society's protection, and the proliferation of AI-generated child sexual abuse images has raised serious concerns about the potential for such images to flood the internet. While existing laws in the U.S., U.K., and elsewhere already treat most of these images as illegal, law enforcement faces challenges in combating them. The European Union is being urged to strengthen its laws to make it easier to combat AI-generated abuse, with a particular focus on preventing the re-victimization of previous abuse victims.

Criminalizing AI-Generated Content

The EU is set to criminalize the sharing of AI-generated graphic images, including child sexual abuse images, revenge porn, and fake content. According to Politico, the plan is expected to become law by mid-2027. The decision comes in the wake of incidents such as the fake AI-generated graphic images of pop superstar Taylor Swift, which were widely circulated on social media.

The EU has also proposed making the live streaming of child sexual abuse a new criminal offence, and the possession and exchange of “paedophile manuals” would be criminalized under the plan. As part of the wider measures, the EU says it will aim to strengthen the prevention of CSA, raise awareness of online risks, and provide support to victims. It also wants to make it easier for victims to report crimes and, potentially, to offer financial compensation for verified cases of CSA.

Before submitting the proposal, the Commission conducted an impact assessment. It concluded that the growing number of children online and “the latest technological developments” have created new opportunities for CSA to occur. Because differences in member states’ legal frameworks may hinder action to combat abuse, the proposal aims to encourage member states to invest more in “raising awareness” and “reducing the impunity that pervades the sexual abuse and exploitation of children online.” The EU hopes to improve current “limited” efforts to prevent CSA and assist victims.



Prior EU CSA-Related Legislation

Back in May 2022, the EU tabled a separate draft of CSA-related legislation. That draft aims to establish a framework requiring digital services to use automated technology to detect existing or new child sexual abuse material and to report such cases swiftly so that appropriate action can be taken.

The CSAM (Child Sexual Abuse Material) scanning programme has proven controversial and continues to divide lawmakers in the Parliament and the EU Council, raising suspicions about the relationship between the European Commission and child safety technology lobbyists. In the less than two years since that private-information scanning scheme was proposed, concerns about the risks of deepfakes and AI-generated images have risen sharply, including fears that the technology could be misused to produce CSAM and that fake content could make it harder for law enforcement to identify real victims. The viral boom in generative AI is therefore prompting lawmakers to revisit the rules.

As with the CSAM scanning programme, co-legislators in the EU Parliament and Council will decide on the proposal’s final form. However, the CSA crackdown proposal is likely to be far less divisive than the information-scanning plan, and is therefore more likely to pass while the other remains stalled.


According to the Commission, once agreement is reached on how to amend the current directive to combat CSA, it will enter into force 20 days after publication in the Official Journal of the European Union. Once in force, the directive will provide important safeguards for preventing AI-enabled child sexual abuse and protecting victims.

Legal Implications

The use of AI to produce child sexual abuse material has sparked debates about the legality of such actions. Recent cases, such as the arrest of an individual in Spain for using AI image software to generate deepfake child abuse material, have prompted discussions about how such material should be treated legally. Existing criminal laws against child pornography apply to AI-generated content, but efforts are still being made to address the legal complexities surrounding the use of AI for nefarious purposes.


Challenges and Solutions

The widespread availability of AI tools has made it easier to create fake content, including child sexual abuse images, presenting challenges for law enforcement and technology providers in combating the proliferation of such content. Efforts are underway to develop technical solutions, such as training AI models to identify and block AI-generated CSA images, although these solutions come with their own challenges and potential harms.

Final Words

The EU’s decision to criminalize the use of AI to generate child sexual abuse images and fake content reflects the growing concern over the potential for AI to be misused for nefarious purposes. This decision marks a significant step towards addressing these issues. However, it also highlights the legal and technical challenges associated with combatting the proliferation of AI-generated abusive and fake content.
