
Over 75% of Indians exposed to deepfake videos in the last 12 months; only a fraction realised it was AI trickery

Cybersecurity experts at McAfee have shared startling findings from a recent survey that sheds light on how widespread exposure to deepfake content has become among Indians.

In January and February of this year, McAfee conducted a research study in several countries to examine the impact of artificial intelligence and technology on the future. The research, carried out by MSI-ACI, surveyed 7,000 consumers across various countries, including the US, UK, France, Germany, Australia, India, and Japan.

Among the respondents, more than one in five Indians (22 percent) admitted to encountering a political deepfake that they initially believed to be genuine.

This prevalence of deepfake exposure raises concerns, especially with ongoing elections and sporting events in India, where distinguishing between real and fake content becomes increasingly challenging due to the sophistication of AI technologies.

Some of the other key findings from the survey:

  • Thirty-one percent of respondents listed the influence of fake news on general elections as one of the most concerning issues related to AI-powered technology.
  • Misinformation and disinformation emerged as significant concerns, with recent incidents involving public figures such as Sachin Tendulkar, Virat Kohli, Aamir Khan, and Ranveer Singh cited as examples.
  • Respondents expressed concerns about various uses of deepfakes, including cyberbullying (55 percent), creating fake pornographic content (52 percent), facilitating scams (49 percent), impersonating public figures (44 percent), undermining public trust in the media (37 percent), influencing elections (31 percent), and distorting historical facts (27 percent).
  • Eighty percent of respondents stated that they are more concerned about deepfakes than they were a year ago, reflecting growing awareness of the potential risks of this technology.
  • Sixty-four percent of respondents believe that AI has made it harder to spot online scams, highlighting the challenge deepfake technology poses for detecting fraudulent activity.
  • Only 30 percent of respondents feel confident in their ability to distinguish between real and fake content generated by AI, indicating a widespread lack of awareness and preparedness.
  • In the past 12 months, 38 percent of respondents encountered a deepfake scam, with 18 percent falling victim to such scams. These scams often involve impersonating celebrities or cloning individuals’ voices to deceive others into parting with personal information or money.

How to stay safe

In today’s digital age, verifying information before sharing it is crucial, especially with the rise of deepfakes and AI-generated content. Be cautious when encountering distorted images, robotic voices, or emotionally charged content, as these can often signal fake news.

Before spreading information, ensure it is true and accurate using reliable fact-checking tools and trusted news sources. Watch out for manipulated images, which often contain imperfections like extra fingers or blurry faces. Pay attention to the voices in videos, as AI-generated ones may have awkward pauses or unnatural emphasis.

McAfee has developed Project Mockingbird, an AI-powered fake audio detection technology, to combat the growing threat of cybercriminals using AI-generated audio for scams and manipulation. Developed by McAfee Labs, the technology helps users identify AI-generated audio in videos, giving them a clearer picture of when content may have been manipulated.

Be careful with emotionally charged content, especially material that incites extreme emotions such as anger or sadness. Much like phishing emails, fake news is designed to push you into reacting before you can think carefully.
