So get this, OpenAI is working on a tool that can detect whether an image was made by AI, and it's reportedly 99% accurate, man. Mira Murati, OpenAI's CTO (the company behind the famous ChatGPT and the DALL-E image generator), spilled the beans at the Wall Street Journal's Tech Live conference in Laguna Beach. She said they're currently testing the tool internally and plan to release it to the public in the near future, but gave no specific timeline, bro.
Now, a few other tools out there already claim to do the same thing, but let me tell you, they're not very reliable. OpenAI itself tried something similar with text detection earlier this year, but its AI text classifier flopped on accuracy and got scrapped in July. They're not giving up, though, man. They're working on making detection better and expanding it to cover AI-generated audio as well as images. They know the risks, dude: AI-generated media can be used to fake news reports and all that crazy stuff. It's getting pretty wild.
On top of that, they dropped a little hint about the new AI model they're cooking up, dude. They haven't officially said what it'll be called, but they filed a trademark application for "GPT-5." Can't wait to see what's in store with that, man.
Now let me tell you about the hallucination issue with ChatGPT. Sometimes these chatbots just make stuff up and present it like fact, you know? It's like they're tripping on some crazy drug. So, when asked whether the future GPT-5 model will spout less BS, Murati was basically like, "Maybe." They've made progress with GPT-4, but they've still got work to do. It's a trip, man.
Oh, and check this out! There's speculation about OpenAI making its own computer chips instead of relying on companies like Nvidia. Right now, Nvidia dominates the market for AI chips, but Sam Altman, OpenAI's CEO, said he wouldn't rule out the possibility of making their own. That would be something, bro.
That’s the latest scoop from OpenAI, my friends.