ChatGPT has recently been at the center of a lot of conversation, both good and bad. Some are embracing it with open arms, while others are worried about its negative impact on communities such as artists and writers, among others.
Let’s delve a bit into ChatGPT before moving on to DetectGPT.
Consequences in Order
While powerful Large Language Models (LLMs) like ChatGPT, PaLM, and GPT-3 have numerous beneficial applications, they can also be used as effective tools for cheating on homework assignments or for generating convincing-but-inaccurate news blogs and articles.
They have also been reported to frequently give out inaccurate information. The task of differentiating machine-generated text from human-written text has therefore become imperative in many domains. But as LLM outputs grow increasingly fluent and human-like, this task is becoming increasingly difficult.
Stanford’s Stance
A Stanford University research team recently addressed this issue in the paper DetectGPT: Zero-Shot Machine-Generated Text Detection Using Probability Curvature, which presents DetectGPT, a novel zero-shot approach that uses the curvature of a model's log probability function to predict whether a given passage was generated by a particular LLM.
The Stanford team has summarised their study’s main contributions in the following manner:
This work focuses on the task of zero-shot machine-generated text detection: given a sample text (a "candidate passage"), the goal is to determine whether it was generated by a particular source LLM, without training a dedicated classifier.
Delving Deeper Into DetectGPT
DetectGPT does not require human-written or machine-generated samples for training. Instead, it leverages generic pre-trained mask-filling models to generate minor perturbations of the passage in question. DetectGPT then operates on the premise that samples from a specific source model lie in regions of negative curvature of that model's log probability function: if the passage was machine-generated, its log probability under the source model should be noticeably higher than that of its perturbed variants. A rough sketch of this idea is shown below.
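The following is a minimal, illustrative sketch of that perturbation-discrepancy idea, assuming the Hugging Face transformers library and GPT-2 as a stand-in source model. The paper perturbs passages with a T5 mask-filling model; here a crude random word-dropout step stands in for that, purely for illustration, and the function names are hypothetical.

```python
# Hedged sketch of DetectGPT's perturbation discrepancy, NOT the authors' code.
# Assumes: `transformers`, `torch`, and GPT-2 as the hypothetical source model.
import random

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def log_likelihood(text: str) -> float:
    """Average per-token log probability of `text` under the source model."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()  # loss is the mean negative log likelihood


def perturb(text: str, drop_prob: float = 0.15) -> str:
    """Toy perturbation: randomly drop words (the paper uses T5 mask filling)."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept) if kept else text


def perturbation_discrepancy(text: str, n_perturbations: int = 20) -> float:
    """Original log likelihood minus the mean over perturbed variants.

    Machine-generated text tends to sit near a local maximum of the source
    model's log probability, so perturbing it lowers the likelihood more
    than perturbing human-written text does.
    """
    original = log_likelihood(text)
    perturbed = [log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return original - sum(perturbed) / len(perturbed)


if __name__ == "__main__":
    passage = "The quick brown fox jumps over the lazy dog near the river bank."
    score = perturbation_discrepancy(passage)
    # A larger positive score suggests the passage came from the source model.
    print(f"perturbation discrepancy: {score:.3f}")
```

In practice, the discrepancy is compared against a threshold (or used as a ranking score), which is what makes the method zero-shot: no detector is trained, only the source model and a perturbation model are queried.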
To gauge the effectiveness of DetectGPT, the Stanford team compared it with existing zero-shot methods such as log-rank, rank, and entropy baselines. They used the XSum dataset to represent fake news detection, Wikipedia paragraphs from SQuAD contexts for academic writing, and the Reddit WritingPrompts dataset for creative writing.
In the experiments, DetectGPT outperformed the strongest zero-shot baseline by over 0.1 AUROC (area under the receiver operating characteristic curve, a classification performance metric) on XSum and by roughly 0.05 AUROC on the SQuAD Wikipedia contexts. It also performed competitively with supervised detection models trained on large numbers of samples.
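For readers unfamiliar with the metric, here is a quick illustration of how AUROC is computed, assuming scikit-learn is available. The labels and detection scores below are made up purely for the example and are not from the paper.

```python
# Illustration of the AUROC metric on hypothetical detection scores.
from sklearn.metrics import roc_auc_score

# 1 = machine-generated, 0 = human-written (hypothetical labels)
labels = [1, 1, 1, 0, 0, 0]
# Hypothetical detector scores (e.g. perturbation discrepancies)
scores = [0.92, 0.75, 0.40, 0.55, 0.20, 0.10]

# AUROC of 1.0 means perfect separation; 0.5 is no better than chance.
print(roc_auc_score(labels, scores))  # -> ~0.889
```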
OpenAI Answers Back
Within just five days after the Stanford research paper was published, OpenAI introduced its own AI Text Classifier detection tool.
Reactions to these tools have been mixed. UC Berkeley student Charis Zhang recently tweeted, “There was a time where GPT generated texts were easily detectable, largely not the case today. Also, people don’t copy and paste directly from GPT without changing it up, which throws any AI detection model off completely.”
With the tug of war going on between ChatGPT and DetectGPT, among other detection tools, will the world be devoid of original art and content, or will artistic originality still prevail?