Alright folks, we've got some interesting news coming in from OpenAI. These guys are the brains behind ChatGPT, that incredible AI software. And guess what? They're working on a tool that can detect whether an image was created by their DALL-E 3 image generator. How cool is that?
During the Tech Live event organized by the Wall Street Journal, Mira Murati, OpenAI's Chief Technology Officer, spilled the beans about this new technology they're cooking up. Now, listen up, 'cause this tool, which is still being tested internally, can identify whether an image was created by DALL-E 3 with an accuracy of nearly 99%! Yeah, you heard me right. Almost 99% accuracy, folks. Murati dropped this bombshell right there on stage.
Now, hold your horses, ’cause Murati was tight-lipped about when exactly this detection tool will hit the market. But don’t you worry. She promised to deliver more juicy details real soon. Gotta give it to her for keeping us on the edge of our seats.
But hey, let's not forget that Google isn't slacking either. They've got some new features in the works to help us regular folks figure out whether a photo was generated by some AI wizardry. They're adding a handy "About this image" button that'll give us contextual information about a picture's origins. We're talking when it was first uploaded, and even whether it's been flagged as misinformation by reputable news outlets. It's like our personal AI lie detector for images!
And get this, even images from Midjourney or the ones from Shutterstock are gonna be labeled by the wizards at Google. They're leaving no stone unturned, folks.
Now, here's a little tidbit about OpenAI. They recently removed a little telltale sign from ChatGPT's generated texts. You know what it was? That "regenerate response" message at the end of the text. Yep, they used to leave that breadcrumb for us to spot. But not anymore! Those sneaky developers fine-tuned their creation and even rephrased some of the sentences to make them sound more human-like. Can't let us catch on too easily, right?
Back in January 2023, OpenAI launched a detection tool for AI-written text, while admitting it's pretty darn hard to reliably detect all text written by AI. I mean, let's face it, these AI geniuses keep pushing the boundaries, making it harder for us to spot their creations.
Alrighty then, that’s it for now. This news is courtesy of Mashable. Stay tuned for more updates, my friends.