
AI, Deepfakes, and CSAM: A Human Rights Emergency in the Digital Age

Updated: Oct 10


As artificial intelligence rapidly evolves, it’s transforming not only how we communicate and create, but also how crimes are committed. Among the most chilling examples is the rise of AI-generated Child Sexual Abuse Material (CSAM) and deepfakes, often involving minors. These aren't just technological glitches or internet problems. They are, at their core, human rights violations.


AI and the Creation of CSAM


Child Sexual Abuse Material (CSAM) refers to any content that depicts sexually explicit activities involving minors. This material has long been a scourge on the internet, and unfortunately, AI has introduced new ways for perpetrators to generate and distribute this content. AI algorithms, especially those used in image generation and editing, have made it possible to create realistic and disturbing images of minors. This blurs the line between real and fabricated content, complicating the efforts of law enforcement agencies to track down the original abusers or victims.


🚨 The Right to Safety and Dignity


Understanding Children's Rights and the Impact of AI

Every child has a fundamental right to be protected from exploitation, abuse, and violence, both online and offline. Unfortunately, the use of artificial intelligence (AI) to create fake sexual images of children raises serious concerns.


The Consequences of Fake Content

Even when these images are artificially generated and no physical abuse took place in their creation, the effects can be very real:

  • Damage to Reputations: Fake images can harm a child's reputation, affecting their relationships and social life.

  • Trauma: The existence of such content can cause severe emotional distress and trauma for the children depicted.

  • New Forms of Abuse: This technology allows abusers to exploit children in new ways, often without being physically present.


It is crucial to recognize and address these issues to ensure that children are protected and their rights are upheld in our increasingly digital world.


🔍 Deepfakes and the Erosion of Consent


Deepfakes are AI-generated images, videos, or audio clips that mimic real people with an astonishing degree of accuracy. Originally, this technology was celebrated for its potential in the entertainment industry: recreating scenes with long-deceased actors, for example, or generating realistic voiceovers. However, deepfakes have rapidly become synonymous with malicious use, especially in the realm of non-consensual pornography.


AI-powered deepfake technology is also increasingly used to insert real people, sometimes minors, into pornographic content without their knowledge or consent. This is a gross violation of their rights to privacy, bodily autonomy, and freedom from defamation and harm. Victims often have no legal recourse, especially when content goes viral across jurisdictions.

The STOP CSAM Act aims to fill this gap by making platforms responsible for content they knowingly permit to circulate, including the new, AI-driven forms of abuse victims now face. However, legislation is just one part of the solution.


🛡️ Balancing Rights: Privacy ≠ Immunity


Some argue that increased content moderation or legal obligations for tech companies will infringe on privacy rights. But let’s be clear:

Privacy is a right. Protecting children from abuse is a duty.

The two are not in conflict; they are mutually reinforcing goals. Failing to act not only enables abusers, it violates the rights of survivors. Inaction sends a message that the internet is above the law, and that some rights (like free expression or corporate immunity) matter more than others (like child safety and dignity). That is a moral and legal imbalance we can no longer tolerate.




Why It Matters: Ethical and Legal Challenges




The rise of AI technologies, like those used to create child sexual abuse material (CSAM) and deepfakes, brings serious ethical issues that society is only beginning to understand. While AI itself is not inherently harmful, using it to create abusive content raises urgent questions about how we should build, share, and control these technologies.


Legal Challenges


From a legal standpoint, current laws are struggling to keep up with the fast changes in AI technology. Many countries do not have specific laws that deal with AI-generated CSAM or deepfakes, making it hard for law enforcement to take action against those who create such content. This situation creates a legal gray area that raises important questions about responsibility: who should be held accountable when AI is used to make illegal content? Is it the person who developed the AI program, the person who used it, or both?


Privacy Concerns

As deepfakes become more common, people are increasingly worried about their privacy and the possibility of becoming victims of these attacks. The absence of clear legal options for victims adds to this fear, highlighting the need for governments, technology companies, and legal experts to work together on effective solutions.


Lobbying for the STOP CSAM Act on Capitol Hill with International Justice Mission



On September 10, 2025, over 200 individuals from 40 states championed the protection of children globally by lobbying for the STOP CSAM Act, which aims to hold tech companies more accountable in the fight against Child Sexual Abuse Material (CSAM) online.


We spoke with several representatives across New York (where I now reside): some passionate advocates for child protection, others primarily concerned about the implications for digital privacy and civil liberties. These conversations were illuminating. While there's universal agreement that CSAM is a horrific and urgent issue, there's also deep hesitation around any legislation that might erode user privacy.


That’s why the STOP CSAM Act is so important and so misunderstood. At its core, this bipartisan bill is designed to hold tech companies accountable when they knowingly host, distribute, or fail to act on CSAM found on their platforms. It provides survivors a legal pathway to seek justice, demands transparency from tech companies about their handling of CSAM, and makes clear that Section 230 protections don’t apply when a platform is complicit in this kind of abuse.



One key point I always emphasize in these discussions is this:


👉🏽 This bill does not compromise privacy.


End-to-end encryption, user confidentiality, and the general right to digital privacy are not threatened by this legislation. What is threatened is the unchecked ability of platforms to turn a blind eye to known abuse in the name of neutrality or profit.

Just as we are rightly concerned about protecting our private data and communications, we must also be concerned about the safety and dignity of children, who are often revictimized online, sometimes for years, because platforms fail to act. Privacy and safety are not mutually exclusive; they can and must coexist.



The STOP CSAM Act is a necessary step toward that balance. It ensures that the digital world isn’t a lawless space when it comes to some of the most serious and devastating crimes. As a society, we cannot afford to treat privacy as sacred while allowing abuse to flourish in its shadow.


We have the tools, and now the legal framework, to do better.


Team New York at Capitol Hill - September 10, 2025


✊🏽 Conclusion: Technology Must Serve Justice, Not Bypass It

If we allow AI to become a tool of abuse and do nothing, we are complicit in a quiet but devastating human rights crisis. Laws like the STOP CSAM Act aren’t about censorship; they’re about justice, accountability, and the belief that every person, especially every child, has a right to grow up free from digital violence.



This is a test of our values in the age of AI. And the world is watching.


So, what are your thoughts on the implications of AI in crime? Join the conversation and share your perspectives on this compelling topic! 🕵️‍♂️🔍


Stay Informed, Stay Safe!



