Chinese online scammers, Russian propaganda agencies, and AI chatbots, including those from legitimate, well-known tech companies, have one thing in common: they all present themselves as women, whether through fake profiles or AI-generated voices.
This highlights how deep-rooted gender stereotypes continue to shape behavior in the digital age, reflecting long-standing biases that have carried over from the physical world into the virtual one.
Historically, people have often personified inanimate objects as women. Ships are a classic example, as are, ironically, massive storms: both have traditionally been given female identities even as they are treated as objects. Notice how often someone describes a boat by saying, “She’s beautiful.”
This tendency to humanize inanimate objects also extends to digital entities. Fake social media profiles or chatbots that mimic human characteristics are more appealing if they are made in the image of a woman.
A growing number of AI-enabled chatbots, from those designed to scam people to voice assistants from legitimate companies like OpenAI, Microsoft, and Apple, have more often than not been personified as women.
Sylvie Borau, a marketing professor and online researcher in France, observes that scammers want to inject some emotion and warmth into their bots, and the easiest way to do that is to pick a woman’s face and voice.
Borau’s research indicates that internet users often prefer “female” bots since they are perceived as more human than “male” versions. She also notes that women are generally considered warmer, less threatening, and more agreeable, while men are often viewed as more competent but potentially hostile.
This perception can make people more inclined to engage with a fake account that appears female, whether consciously or subconsciously.
Wen-Ping Liu, an investigator with Taiwan’s Ministry of Justice who researches disinformation, noted a similar trend in China’s efforts to influence Taiwan’s elections through fake social media accounts. The most successful profiles, he found, were those that posed as women: pretending to be female, Liu observed, is the easiest way to gain credibility.
Cyabra, an Israeli tech firm specializing in bot detection, further supports the notion that female profiles are more engaging. Its analysis of over 40,000 profiles found that female social media profiles receive more than three times the views of male profiles, with younger female profiles attracting the most attention. Cyabra’s report concluded that presenting a fake account as a woman significantly increases its reach compared to presenting it as a man.
Even high-profile figures in AI, such as OpenAI CEO Sam Altman, have recognized the appeal of female voices in AI applications. Altman approached Scarlett Johansson about lending her voice to ChatGPT, citing its comforting quality. Johansson declined, but OpenAI went ahead and released a voice assistant that she said sounded eerily similar to her own, recalling the AI companion she voiced in the film “Her.” The episode underscores the industry’s preference for female voices as a way to make AI interactions more engaging and relatable.
However, this preference comes with a dark side. Borau’s research also found that “female” chatbots are more likely to receive sexual harassment and threats than their “male” counterparts. This disturbing trend underscores how gender biases carry over into the treatment of digital personas.
Nations like China and Russia have long exploited these biases in their online election interference and influence campaigns, mainly using fake profiles of women to spread propaganda and disinformation.
These profiles also often exploit societal views of women as wise, nurturing figures.
The prevalence of female AI personas and fake profiles can be partly attributed to the male-dominated tech industry. A UN report titled “Are Robots Sexist?” suggested that the gender disparity in tech contributes to the perpetuation of sexist stereotypes in AI products, and that greater diversity in programming and AI development could help mitigate these biases. Put simply, most chatbots are made female in part because the people making them are overwhelmingly men.
For programmers who want to make their chatbots as human-like as possible, this creates an ethical dilemma: selecting a female persona may be the most effective choice, yet it risks reinforcing sexist views of women in real life. “It’s a vicious cycle,” Borau remarked. “Humanizing AI might dehumanize women.”
This interplay between technology and gender stereotypes raises important questions about the ethics of AI development and underscores the need for greater diversity in the tech industry to foster more inclusive and equitable digital environments.