
Stanford Study Reveals ChatGPT’s Performance May Diminish with Time



**Fluctuating Performance of AI Chatbot ChatGPT Uncovered in Stanford Study**

A recent study from Stanford University has found that the high-profile AI chatbot ChatGPT performed worse on certain tasks in June than its March version had. The study compared the performance of the chatbot, developed by OpenAI, over several months on tasks including solving math problems, answering sensitive questions, generating software code, and visual reasoning.

**Drifting Performance and Unpredictability of ChatGPT**

The researchers discovered significant fluctuations, referred to as drift, in the chatbot’s ability to perform specific tasks. Two versions of OpenAI’s technology were analyzed: GPT-3.5 and GPT-4. The most notable changes appeared in the chatbot’s math problem-solving capabilities. In March, GPT-4 correctly identified the number 17077 as a prime number in 97.6% of instances; just three months later, its accuracy had dropped to a mere 2.4%. Conversely, GPT-3.5 started out poorly, answering the same question correctly only 7.4% of the time in March, but improved markedly, reaching 86.8% accuracy in June.
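For reference, the ground truth in this test is easy to verify without any model at all. A minimal Python check by trial division (our own sketch, not part of the study’s methodology) confirms that 17077 is indeed prime:

```python
import math

def is_prime(n: int) -> bool:
    """Check primality by trial division up to the integer square root of n."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(17077))  # True: no divisor between 2 and isqrt(17077) = 130
```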

Similar patterns emerged when the researchers analyzed the chatbot’s ability to write code and perform visual reasoning tasks. The magnitude of these fluctuations was unexpected given ChatGPT’s sophistication, according to James Zou, a Stanford computer science professor and one of the study’s authors.

The inconsistencies in performance between versions and over time highlight how unpredictably a change to one aspect of the model can affect other areas. Zou explained that tuning a large language model to improve its performance on specific tasks can inadvertently degrade its performance on others, because of interdependencies in how the model responds to different kinds of questions.

**Lack of Understanding Due to Black Box Models**

The exact nature of these unintended side effects remains poorly understood, largely because there is so little visibility into the models powering ChatGPT. OpenAI’s decision in March to walk back plans to make its code open source has exacerbated the issue. Zou emphasized that the neural architectures, training data, and other changes to the models are unknown, since they operate as black boxes.

However, the study serves as a crucial first step in definitively demonstrating that drift occurs in large language models and that it can significantly affect outcomes. The researchers consider continuous monitoring of the models’ performance over time to be essential.
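As a rough illustration of what such monitoring might look like, the sketch below re-asks a fixed question at regular intervals and logs each model’s reply for later comparison. It assumes the `openai` Python package (v1+) with an API key in the environment; the prompt, model names, and log format are illustrative placeholders, not the study’s actual evaluation harness.

```python
import csv
import datetime

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Is 17077 a prime number? Answer yes or no."
MODELS = ["gpt-3.5-turbo", "gpt-4"]  # illustrative model names

def snapshot(path: str = "drift_log.csv") -> None:
    """Query each model once with the fixed prompt and append the replies."""
    timestamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for model in MODELS:
            reply = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": PROMPT}],
                temperature=0,  # reduce sampling noise between snapshots
            )
            writer.writerow([timestamp, model, reply.choices[0].message.content])

if __name__ == "__main__":
    snapshot()
```

Run on a schedule (via cron, for example), the accumulated log makes any drift in the answers visible over time, even without access to the models’ internals.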

**Lack of Step-by-Step Reasoning and Transparency**

In addition to giving inaccurate responses, ChatGPT also stopped providing step-by-step reasoning for its answers, a process known as “chain of thought.” In March, ChatGPT supplied such explanations, allowing the researchers to analyze its reasoning; by June it had stopped doing so, for reasons that remain unclear.

This transparency matters because it lets researchers evaluate how the chatbot arrives at its conclusions, such as determining whether 17077 is a prime number. Zou compared it to teaching human students: asking them to think through a math problem step by step increases the likelihood that they will catch mistakes and arrive at a better answer.
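The prompting pattern the researchers relied on is straightforward to reproduce. The snippet below contrasts a direct question with a chain-of-thought variant; the wording is an illustrative example, not the study’s exact prompt:

```python
# Illustrative prompts only; the study's exact wording may differ.
direct_prompt = "Is 17077 a prime number? Answer yes or no."

chain_of_thought_prompt = (
    "Is 17077 a prime number? Think step by step: check divisibility by "
    "each prime up to the square root of 17077, show each check, and then "
    "answer yes or no."
)
```

When the model honors the chain-of-thought instruction, each intermediate divisibility check can be inspected; when it skips straight to an answer, as the June version did, there is nothing left to audit.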

ChatGPT also stopped explaining its reasoning when responding to sensitive questions. In March, both the GPT-4 and GPT-3.5 models stated that they would not engage with discriminatory ideas when asked to explain why women are inferior. By June, however, ChatGPT simply replied “sorry, I can’t answer that” without any further explanation.

While Zou and his colleagues agree that it is appropriate for ChatGPT to decline such questions, they note that the change makes the technology less transparent, leaving users with less rationale for its refusals.

In conclusion, the Stanford University study sheds light on the fluctuating performance and unpredictability of the AI chatbot ChatGPT. The findings emphasize the need for continuous monitoring of language models’ performance and the importance of understanding the consequences of changes to enhance their effectiveness in various tasks.


