In a recent investigative report, Forbes revealed that Social Links, a Russian spyware company previously banned from Meta’s platforms for alleged surveillance activities, has co-opted ChatGPT to spy on internet users.
The technique, which involves collecting and analyzing social media data to gauge users’ sentiments, adds another controversial dimension to ChatGPT’s use cases.
Presenting its unconventional use of ChatGPT at a security conference in Paris, Social Links showcased the chatbot’s proficiency at text summarization and analysis. Feeding it data gathered by the company’s proprietary tool on online discussions about a recent controversy in Spain, the presenters demonstrated how ChatGPT could quickly process the posts and categorize their sentiment as positive, negative, or neutral. The results were then displayed in an interactive graph.
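Forbes did not publish the company’s code, but the demo amounts to a fairly standard sentiment-classification pipeline over scraped text. The sketch below is a minimal illustration of that pattern using the public OpenAI Python library; the model name, prompt wording, and placeholder posts are assumptions for illustration, not Social Links’ actual tooling.

```python
# Minimal sketch of ChatGPT-based sentiment tagging over scraped posts.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the
# environment; the posts list and prompt wording are illustrative only.
from collections import Counter
from openai import OpenAI

client = OpenAI()

# In the reported demo these would come from Social Links' proprietary
# scraper; here they are hypothetical placeholder posts.
posts = [
    "The new policy is a disaster for small businesses.",
    "Honestly relieved the vote finally passed.",
    "Not sure what to think about the announcement yet.",
]

def classify_sentiment(text: str) -> str:
    """Ask the chat model to label one post as positive, negative, or neutral."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model; illustrative choice
        messages=[
            {"role": "system",
             "content": "Classify the sentiment of the user's text. "
                        "Reply with exactly one word: positive, negative, or neutral."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

# Tally the labels; these counts are the raw numbers behind the kind of
# interactive sentiment graph shown in the presentation.
counts = Counter(classify_sentiment(p) for p in posts)
print(counts)
```

The point of the sketch is how little glue is needed: once text has been scraped, turning a general-purpose chatbot into a bulk sentiment classifier is a few dozen lines, which is precisely what worries the privacy advocates quoted below.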
Privacy advocates, however, find this development deeply troubling. Beyond the immediate concerns raised by this specific case, there is a broader worry about AI’s potential to amplify the surveillance industry’s capabilities.
Rory Mir, Associate Director of Community Organizing at the Electronic Frontier Foundation, expressed apprehension that AI could enable law enforcement to expand surveillance efforts, allowing smaller teams to monitor larger groups more efficiently.
Mir highlighted the existing practice of police agencies using fake profiles to infiltrate online communities, which already has a chilling effect on online speech. With AI in the mix, Mir warned, tools like ChatGPT could let officers analyze data collected during undercover operations far more quickly, effectively scaling up online surveillance.
A significant drawback noted by Mir is the track record of chatbots delivering inaccurate results. In high-stakes scenarios like law enforcement operations, relying on AI becomes precarious.
Mir emphasized that when AI influences critical decisions such as job applications or police attention, biases inherent in the training data—often sourced from platforms like Reddit and 4chan—become not just factors to consider but reasons to reconsider the use of AI in such contexts.
The opaque nature of AI training data, referred to as the “black box,” adds another layer of concern. Mir pointed out that biases from the underlying data, originating from platforms notorious for diverse and often extreme opinions, may manifest in the algorithm’s outputs, making its responses potentially untrustworthy.
The evolving landscape of AI applications in surveillance raises essential questions about ethics, biases, and the potential impact on individual freedoms and privacy.