ChatGPT learns to think like humans, is now smart enough to earn a seat at Harvard or Yale


As wonderful and impressive as ChatGPT has been, it has had significant gaps in its abilities and knowledge. For example, it did poorly on mathematical and logic-based questions even while proving proficient enough to pass a Wharton MBA exam and the exam that grants a license to practice medicine in the United States.


Now, though, ChatGPT is proficient enough at reasoning and logic questions to earn itself a seat at some of the top Ivy League schools.

AI learns a human trick
According to researchers, GPT-4, ChatGPT’s advanced AI model, has successfully acquired a form of intelligence known as ‘analogical reasoning,’ previously thought to be exclusive to humans. Analogical reasoning involves solving novel problems by drawing on experiences from similar past situations.

In a specific test that evaluates this type of reasoning, the AI language program outperformed the average score of 40 university students.

The development of human-like thinking abilities in machines has garnered significant attention from experts. Dr. Geoffrey Hinton, a prominent figure in AI, has expressed concerns about the potential long-term risks of more intelligent entities surpassing human control.

Some issues persist, but for how long?
However, many other leading experts disagree and assert that artificial intelligence does not pose such a threat. A recent study emphasizes that GPT-4 still struggles with simple tests that young children can quickly solve.

Nevertheless, the language model displayed promising capabilities, performing on par with humans in tasks such as detecting patterns in letter and word sequences, completing linked word lists, and identifying similarities between detailed stories. Most remarkably, it accomplished these tasks without specific training, appearing to reason by analogy from unrelated earlier problems.
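
To give a concrete sense of the letter-sequence tasks mentioned above, here is a minimal Python sketch of a classic "abc -> abd" style letter-string analogy. The specific strings and prompt wording are illustrative assumptions, not items taken from the study.

```python
# Minimal sketch of a letter-string analogy in the classic "abc -> abd"
# style. The strings and prompt wording are illustrative assumptions.

source_before, source_after = "a b c", "a b d"  # final letter advanced by one
target = "i j k"

prompt = (
    f"If {source_before} changes to {source_after}, "
    f"what does {target} change to?"
)
print(prompt)
# Applying the same "advance the final letter" rule yields "i j l".
```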

Professor Hongjing Lu, the senior author of the study from the University of California, Los Angeles (UCLA), expressed surprise that language-learning models, initially designed for word prediction, demonstrated such reasoning abilities.

GPT still relies on text to process problems
During the study, GPT-4 demonstrated its superiority over the average human in solving problems inspired by Raven's Progressive Matrices, a test that involves predicting the next image in complex arrangements of shapes. To make this possible, the figures were converted into a text format that GPT-4 could comprehend.
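
The study's exact text encoding is not reproduced in the article, but the Python sketch below shows one plausible way a Raven's-style 3x3 grid can be flattened into a prompt. The digit-based cells, bracket layout, and prompt wording are all illustrative assumptions.

```python
# Minimal sketch: serializing a Raven's-style 3x3 matrix into a text prompt.
# The digit cells, bracket layout, and wording are illustrative assumptions,
# not the study's exact encoding.

matrix = [
    ["1 2 3", "2 3 1", "3 1 2"],
    ["2 3 1", "3 1 2", "1 2 3"],
    ["3 1 2", "1 2 3", "?"],  # the model must infer the missing cell
]

prompt = "Complete the pattern.\n"
for row in matrix:
    prompt += "[" + "] [".join(row) + "]\n"
prompt += "Answer:"

print(prompt)
# Each row rotates its cells one step to the left, so the correct
# completion here is "2 3 1".
```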

Furthermore, GPT-4 outperformed school students in a series of four-term word analogies: given a related pair such as ‘love’ and ‘hate’ plus a third word, ‘rich,’ it had to predict the fourth word, ‘poor,’ by applying the same opposite relation.
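
For readers who want to see that format concretely, here is a minimal Python sketch of the four-term analogy task. The prompt phrasing is an assumption; the word pairs come from the article's own example.

```python
# Minimal sketch of the four-term verbal analogy format (A : B :: C : ?).
# The prompt phrasing is an assumption; the words are the article's example.

def analogy_prompt(a: str, b: str, c: str) -> str:
    """Build an 'A is to B as C is to ...' completion prompt."""
    return f"{a} is to {b} as {c} is to"

prompt = analogy_prompt("love", "hate", "rich")
print(prompt)  # -> love is to hate as rich is to
# A model that carries over the opposite-of relation should answer "poor".
```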

Remarkably, GPT-4’s performance on these tests surpassed the average scores of students applying to university.

The study, published in the journal Nature Human Behaviour, explores whether GPT-4’s capabilities reflect mimicked human reasoning or a fundamentally distinct form of machine intelligence.
