Hackers can easily read what you say to ChatGPT and other AI services, study finds

A recent study by researchers at Israel’s Ben-Gurion University has revealed significant privacy vulnerabilities in several AI chatbots, raising concerns about the security of private conversations.

According to Yisroel Mirsky, head of the Offensive AI Research Lab at Ben-Gurion University, malicious actors can exploit these vulnerabilities to eavesdrop on chats on platforms like ChatGPT.

Mirsky highlighted that individuals sharing the same Wi-Fi or local area network (LAN) as the chat participants, or even remote malicious actors, can intercept and monitor conversations without detection.

The research report identifies these exploits as “side-channel attacks,” wherein third parties gather data passively through metadata or other indirect means rather than breaching security barriers.

Unlike traditional hacks that punch directly through firewalls, side-channel attacks exploit information that leaks despite encryption. Although AI developers such as OpenAI do encrypt their traffic, Mirsky’s team discovered flaws in how that encryption is applied, leaving message content susceptible to inference by an eavesdropper.

While side-channel attacks are generally less invasive, they pose significant risks, as demonstrated by the researchers’ ability to infer chat prompts with 55 percent accuracy. This susceptibility makes sensitive topics easily detectable by malicious actors.

Although the study primarily scrutinizes OpenAI’s encryption practices, it suggests that most chatbots, excluding Google’s Gemini, are susceptible to similar exploits.

Central to these vulnerabilities is chatbots’ use of “tokens,” the small chunks of text that models generate and stream to users in real time to keep responses feeling fast. Although chatbot transmissions are typically encrypted, the way these tokens are transmitted creates a previously overlooked vulnerability.

Access to real-time token data enables malicious actors to infer conversation prompts, akin to overhearing a conversation through a closed door.
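
To make the idea concrete, here is a minimal sketch of that eavesdropping step. It assumes, purely for illustration, that each encrypted record in a streamed response carries exactly one token and that the framing overhead is a fixed number of bytes; the overhead constant and the captured sizes below are hypothetical values, not measurements from the study.

```python
# Illustrative sketch only: how per-token streaming could expose token
# lengths to a passive observer on the same network.

ASSUMED_RECORD_OVERHEAD = 29  # hypothetical fixed TLS/framing overhead, in bytes

def estimate_token_lengths(observed_record_sizes):
    """Convert captured ciphertext record sizes into estimated token lengths."""
    return [max(size - ASSUMED_RECORD_OVERHEAD, 0) for size in observed_record_sizes]

# Example: record sizes a passive observer might capture for one streamed reply.
captured_sizes = [33, 31, 36, 30, 34]
print(estimate_token_lengths(captured_sizes))  # -> [4, 2, 7, 1, 5]
```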

To substantiate their findings, Mirsky’s team used a second AI model to analyze the raw data acquired through the side channel. Their experiments showed a high success rate in predicting conversation prompts, underscoring the severity of the vulnerability.
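
The toy sketch below illustrates that inference step under heavy simplification: instead of a trained AI model, it simply ranks hypothetical candidate phrases by how closely their word-length pattern matches the lengths recovered from the side channel. The whitespace “tokenizer” and the candidate phrases are stand-ins, not the researchers’ actual method.

```python
def token_lengths(text):
    # Stand-in tokenizer: splits on whitespace instead of a real model vocabulary.
    return [len(tok) for tok in text.split()]

def match_score(observed, candidate):
    # Lower is better: sum of absolute differences between length patterns.
    cand = token_lengths(candidate)
    if len(cand) != len(observed):
        return float("inf")
    return sum(abs(a - b) for a, b in zip(observed, cand))

observed = [4, 2, 7, 1, 5]      # lengths recovered via the side channel
candidates = [                  # hypothetical guesses at the hidden text
    "what is keeping a secret",
    "tell me about side channels",
]
print(min(candidates, key=lambda c: match_score(observed, c)))
```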

Responding to these concerns, Microsoft assured users that personal details are unlikely to be compromised by the exploit affecting its Copilot AI. However, the company pledged to address the issue promptly with updates to safeguard customers.

The implications of these vulnerabilities are profound, particularly concerning sensitive topics such as abortion and LGBTQ issues, where privacy is paramount. Exploiting these vulnerabilities could have serious consequences, potentially endangering individuals seeking information on such topics.

As the debate surrounding AI ethics and privacy intensifies, these findings underscore the urgent need for robust security measures to protect users’ privacy in AI-driven interactions.
