OpenAI kills off own AI detector after it fails at its job
The makers of ChatGPT, OpenAI, have ended support for its own AI detector after it routinely failed to provide accurate results.
ChatGPT’s developer, OpenAI, has shut down its own AI writing detector after it continuously failed to function properly.
OpenAI launched the classifier in January, and quietly ended support for it on July 20. Due to the low-key nature of the announcement, it’s only just beginning to surface now.
OpenAI’s detector was notorious for being bad at detecting AI-written content. At launch, the firm admitted that it correctly identified AI-generated text only 26% of the time, a figure that barely improved over its short lifespan.
The classifier’s original announcement blog post has now been altered to state:
“As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
Unreliable AI detector shut down by OpenAI
AI detectors have a history of producing unreliable results. Tools like GPTZero are built not on data embedded in the text itself, but on statistical patterns and speculation.
The AI industry is still investigating whether text could be watermarked or tagged with additional data under the hood. This would function similarly to metadata on an image, which often includes details like the camera model, location and so on.
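OpenAI hasn’t described what such a scheme would look like, and real research proposals typically work by statistically biasing which words a model picks rather than by inserting hidden characters. Still, the general “metadata hidden in the text” idea can be sketched with a toy example (all function names here are illustrative, not from any real tool) that tucks an invisible payload into a passage using zero-width Unicode characters:

```python
# Toy sketch only -- NOT OpenAI's method. Hides a short tag in text
# using zero-width Unicode characters, by analogy with image metadata.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bytes as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_watermark(text: str) -> str:
    """Collect any zero-width characters and decode them back to bytes."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    usable = len(bits) - len(bits) % 8  # drop any trailing partial byte
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, usable, 8))
    return data.decode("utf-8", errors="replace")

marked = embed_watermark("A generated paragraph.", "ai")
print(marked == "A generated paragraph.")  # False: invisible payload present
print(extract_watermark(marked))           # ai
```

The obvious weakness, and one reason this exact approach isn’t used in practice, is that the payload vanishes the moment someone copies the text through a filter that strips unusual characters.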
AI detectors have resulted in students being falsely identified as cheaters, with tools like Turnitin jeopardising the career of a prospective law school graduate.
OpenAI hasn’t provided a timeline for when it plans to introduce a new classifier. Meanwhile, ChatGPT has seen a drop in traffic, potentially due to students going on break for the summer.