
How easy is it to make deepfakes? Apparently, very

Deepfakes have been a menace for quite some time now, but ever since AI went mainstream, the problem has become significantly worse.

Recently, the Australian government decided to put its foot down and took decisive action against the proliferation of deepfakes and the people who use them to create inappropriate content targeting real people.

Australian Prime Minister Anthony Albanese confirmed plans to introduce legislation that bans and penalises the creation and distribution of deepfake pornographic material, noting that such material is mainly used to target women and perpetuates violence against them. However, experts and social workers have warned that more significant issues are at play.

The Case That Sparked Outrage
Australia’s decision to ban deepfakes comes on the heels of a rather shocking and bizarre case, as reported by the Guardian. Using some of the most popular AI platforms, malicious actors created hyper-realistic yet entirely fabricated sexual images of twin sisters April and Amelia Maddison, without their knowledge or consent.

Although they weren’t the first victims of deepfakes in Australia by any means, what was shocking about their case was that soon after the depraved images surfaced, an AI model of the sisters was discovered on the platform CivitAI that could be used to generate personalised sexual deepfake photos of the two women.

The case took Australia by storm. Shockingly, this model was trained on over 900 images scraped from the twin sisters’ social media accounts and other online profiles. The ease with which individuals could access and manipulate these images highlights how difficult it is to police such models.

Similar models were also found that allowed users to create deepfake images of other celebrities, influencers, and even ordinary people, as long as enough images were available to train the AI models.

Beyond the apparent violation of privacy and dignity, such content can have lasting psychological and reputational effects on its victims. Moreover, the speed with which deepfakes spread across online platforms multiplies the harm manifold.

Policing AI, AI Models and Deepfakes
While several organizations and countries are trying to combat deepfake pornography, they aren’t moving at the pace at which the technology is developing. As observed in Australia’s case, even though the country already has a framework that proved potent against revenge porn, it faces significant challenges in addressing deepfake porn, notes the Guardian report.

The eSafety Commissioner has also taken proactive measures, initiating proceedings against people who have spread pornography that harmed real individuals or who have failed to remove intimate images from deepfake pornography websites.

However, the effectiveness of these legal actions hinges on international cooperation and the ability to enforce court orders across borders.

With each country setting its own rules for AI and deepfakes, and with several political institutions using deepfakes to their advantage in campaigns or social media outreach, legitimately or otherwise, reaching a joint resolution seems impossible for the time being.

Moreover, the evolving nature of deepfake technology presents ongoing challenges for law enforcement agencies and other authorities.

As AI algorithms become increasingly sophisticated, distinguishing between genuine and manipulated content becomes more difficult. This underscores the need for continuous innovation in detection and mitigation strategies to stay ahead of malicious actors.

Beyond Legislation: Safeguarding Digital Integrity
While legislative measures are crucial, it is becoming increasingly clear that addressing the root cause of deepfake pornography requires a multifaceted approach. AI and tech companies must play a role in ensuring that their programmes are not used in a nefarious manner.

CivitAI, for instance, has implemented a “Real People Policy” to prevent the creation of suggestive or sexually explicit content featuring real individuals.

Nevertheless, the burden of protecting one’s digital likeness often falls on the individuals depicted in these deepfake images. Experts have observed that there is a need for improved mechanisms to swiftly remove such content.

As the Australian government moves forward with legislative reforms, it must remain vigilant in tackling the new and more complex problems that surface from deepfakes and malicious actors.


