Solutions Review’s Contributed Content Series is a collection of articles written by thought leaders in enterprise technology. In this feature, Louis Landry, an Engineering Fellow at Teradata, examines why you cannot yet fully trust your business to conversational AI interfaces.

When ChatGPT launched last year, Large Language Models (LLMs) and Artificial Intelligence (AI) became the talk of every IT department and dinner party alike. Organizations quickly began scrambling for ways to capitalize on the “conversational interface” wave of AI, which includes solutions such as Google Bard and Claude.ai. But before jumping on the bandwagon, it is worth asking one question: can you trust it? Right now, the answer is “not really… not yet.”

These conversational interfaces, which let users interact in plain natural language, rely on LLMs, a form of generative AI. In predictive AI, data scientists and AI practitioners apply statistical and mathematical models to forecast outcomes. Generative AI, by contrast, creates new content based on vast amounts of data and context. The future will be about combining LLMs with predictive AI to deliver the best possible experience and results.

As we move toward that future, the generative model’s job will be to take imprecise human language and translate it into specific tasks for other systems, such as predictive AI. But we should not expect generative AI simply to replace predictive AI. LLMs are good at parroting trivial math, but they are not actually doing the math. Ask “What is 2×2?” and an LLM will answer 4, because that fact appears all over the internet. Pose a more complex math problem, however, and the model is unlikely to produce the right answer, because it cannot find one in its training data.
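The division of labor described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not a real integration: `extract_task` stands in for an LLM structured-output call (no actual LLM API is invoked), while the arithmetic evaluator represents a deterministic system that genuinely computes the answer instead of pattern-matching one.

```python
import ast
import operator

# Safe arithmetic over a whitelist of operators: this is the deterministic
# system that actually does the math, unlike the LLM.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(expr: str) -> float:
    """Deterministically evaluate an arithmetic expression."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def extract_task(question: str) -> dict:
    """Stand-in for an LLM structured-output call (an assumption for this
    sketch; a real system would prompt the model to emit JSON, not an answer)."""
    expr = question.lower().replace("what is", "").replace("?", "").strip()
    expr = expr.replace("x", "*").replace("×", "*")
    return {"task": "arithmetic", "expression": expr}

task = extract_task("What is 2 × 2?")
print(evaluate(task["expression"]))  # the math system answers, not the LLM → 4
```

The point of the design is that the generative model is trusted only to interpret language, never to supply the computed result itself.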
Some studies have even found that ChatGPT gets roughly 60 percent of math questions wrong. And that is the core problem: LLMs mix inaccurate information in with accurate information. Even when we know their limitations, their answers sound so authoritative and certain that we start trusting them blindly. That becomes a serious matter in high-stakes domains such as healthcare claims.

So why bother with LLMs at all if they are not accurate? Because they offer a whole new level of input functionality. They make AI interactions accessible to ordinary users. Imagine being stranded at an airport after missing a flight. Instead of hunting down a gate agent, you could pull out your phone, ask an LLM-powered chatbot for help, and immediately see rebooking options, seating preferences, and more.

We are at an inflection point where the worlds of rigid machine learning and AI are colliding with LLMs. The traditional methods excel at getting the right answers, but they do not connect with people the way LLMs do. Blending these worlds together is going to change the game.

We are not there yet, though. We still have to work on “explainability” — making AI models and their outputs intelligible to humans. Ask an LLM today how it arrived at an answer, and it will essentially tell you it used the internet. We need more than that: citations, verifiable facts, and reasoning that can be checked. Until we get there, we will not fully trust these LLMs.

There is hope, however, in the discipline called ModelOps, which turns models into managed operational assets and helps address these issues. With ModelOps, we can keep track of when models were trained, by whom, and on what data — it is all about governance and keeping everything in order. We also have to make sure our models do not degrade over time.
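The lineage-tracking side of ModelOps can be sketched as a small model registry. The field names and the example values below are illustrative assumptions, not any particular product’s schema; the point is simply that governance means being able to answer “who trained this model, when, and on what data.”

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    """Minimal lineage record a ModelOps practice might keep (illustrative)."""
    name: str
    version: str
    trained_by: str
    trained_at: datetime
    training_data: str          # pointer to the curated dataset used
    metrics: dict = field(default_factory=dict)

class ModelRegistry:
    def __init__(self):
        self._records = {}

    def register(self, record: ModelRecord):
        self._records[(record.name, record.version)] = record

    def lineage(self, name: str, version: str) -> ModelRecord:
        """Answer the governance questions: who trained it, when, on what."""
        return self._records[(name, version)]

registry = ModelRegistry()
registry.register(ModelRecord(
    name="claims-triage", version="1.2.0", trained_by="data-science-team",
    trained_at=datetime(2023, 9, 1, tzinfo=timezone.utc),
    training_data="s3://curated/claims/2023-08",  # hypothetical dataset path
    metrics={"accuracy": 0.93},
))
print(registry.lineage("claims-triage", "1.2.0").trained_by)
```

In practice this role is filled by a dedicated registry (commercial or open source) rather than an in-memory dictionary, but the governance questions it must answer are the same.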
Model drift is real. As a model’s uses and targets change, its performance deteriorates, so it must be monitored continuously. And of course, we have to use quality data and manage it properly to get trustworthy insights and outcomes from our AI models.

Combining predictive AI with LLMs is going to open up new possibilities. In business terms, that means better upselling, cross-selling, and customer relationships, because AI systems can understand what customers need based on their history. But all of this depends on trust. We can only trust a model’s output as much as we can trust the data set it was trained on. If the data set is flawed, the model will be flawed. We have to take the time to curate the data and remove biases — yet companies rushing to join the LLM race often overlook this crucial step.

So what can companies do to prepare as they dive into the world of LLMs? Start with clean data; clean data is key. And use ModelOps to advance AI — it will help keep track of everything and ensure optimal performance. We are still working things out, but we will get there, and when we do, we will be positioned to lead in AI.
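As a concrete illustration of the drift monitoring discussed above, one common technique (an example choice, not prescribed by the article) is the Population Stability Index, which measures how far a model’s input or score distribution on live traffic has shifted from its training baseline. The thresholds shown are widely used rules of thumb, not a standard.

```python
import math

def psi(baseline_counts, current_counts):
    """Population Stability Index over pre-binned counts; higher = more drift."""
    b_total, c_total = sum(baseline_counts), sum(current_counts)
    total = 0.0
    for b, c in zip(baseline_counts, current_counts):
        b_pct = max(b / b_total, 1e-6)  # floor to avoid log(0)
        c_pct = max(c / c_total, 1e-6)
        total += (c_pct - b_pct) * math.log(c_pct / b_pct)
    return total

baseline = [100, 300, 400, 150, 50]   # score histogram at training time (illustrative)
current  = [60, 220, 380, 240, 100]   # same bins on recent traffic (illustrative)

score = psi(baseline, current)
if score > 0.25:      # rule of thumb: significant shift
    print(f"significant drift (PSI={score:.3f}); retrain or investigate")
elif score > 0.10:    # rule of thumb: moderate shift
    print(f"moderate drift (PSI={score:.3f}); monitor closely")
else:
    print(f"stable (PSI={score:.3f})")
```

Running a check like this on a schedule — and feeding the result back into the ModelOps governance process — is how teams stay ahead of the degradation described above.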