Sam Altman’s Monster: OpenAI wary of GPT-4, which can now recognise and ‘read’ human faces


Recent advancements in OpenAI’s ChatGPT, specifically the introduction of GPT-4, have expanded its capabilities beyond text-based interactions. One notable addition is image analysis, which allows users to engage with the chatbot using visual content. With this update, users can describe images, ask questions about them, and even use facial recognition to identify specific individuals.


The potential applications of this technology are vast and promising. It can assist users in tasks like troubleshooting a malfunctioning car engine by analyzing images or identifying and providing information about a perplexing rash. These developments open new possibilities for leveraging AI to address image-related challenges and enhance problem-solving capabilities.

Jonathan Mosen, the CEO of an employment agency for blind people, has been an early adopter of the advanced version of ChatGPT. During a recent trip, he had the opportunity to explore the chatbot’s visual analysis feature. With ChatGPT’s assistance, Mosen was able to identify the contents of various dispensers in a hotel bathroom, surpassing the capabilities of conventional image analysis software.

Nevertheless, OpenAI is exercising caution when it comes to facial recognition. While the chatbot’s visual analysis feature can identify certain public figures, the company is fully aware of the ethical and legal concerns surrounding facial recognition technology, particularly around privacy and consent. As a result, OpenAI has stopped providing Mosen with information about individuals’ faces.

Sandhini Agarwal, a policy researcher at OpenAI, expresses the company’s commitment to engaging in transparent discussions with the public about integrating visual analysis capabilities into the chatbot. OpenAI actively seeks feedback and democratic input from users to establish clear guidelines and implement safety measures. Furthermore, OpenAI’s nonprofit arm is exploring methods to involve the public in defining rules for AI systems, ensuring responsible and ethical practices.

The integration of visual analysis into ChatGPT is a natural progression, given that the model was trained on both text and images gathered from the internet. However, OpenAI recognizes the potential challenges that come with this development.

One such challenge is the possibility of “hallucinations,” where the system generates misleading or incorrect information in response to images. For instance, when presented with a picture of an individual on the verge of fame, the chatbot might erroneously supply the name of a different notable figure.

As a significant investor in OpenAI, Microsoft also has access to the visual analysis tool and is conducting limited tests with its Bing chatbot. Both OpenAI and Microsoft are proceeding cautiously to protect user privacy and address concerns before considering a broader rollout of the visual analysis feature.
