Technology companies are really pushing the limits with this AI stuff. Now I'm all for progress, but there are some serious problems with these large foundation models. They can categorize images and speech, predict text, and plenty more besides. But it turns out they mess things up more often than not instead of offering valuable assistance.
Take the recent case of a Tesla driver who was using the carmaker's Autopilot software when he ran a red light and caused a terrible accident – one that ended in a $23,000 restitution order. Tesla also had to recall two million vehicles after the US National Highway Traffic Safety Administration found real gaps in Autopilot's safety controls. And there's more where that came from – roughly a dozen Autopilot-related lawsuits are pending in the US.
The healthcare industry is getting in on the action too. UnitedHealthcare is being sued because it's allegedly using AI models to deny necessary care to insured seniors. That's some serious stuff.
Now I don't want to be down on technology, but these companies say they're thinking about putting precautions in place to keep these models in line. The thing is, if the models weren't filled with all this toxic content and harmful material, we wouldn't need those "guardrails" in the first place.
Listen, I totally get that there's real value in AI models. They're making big waves in decision-support jobs, and they've done wonders in fields like speech recognition and image recognition. Things have come a long way since ELIZA, the old chatbot. But at the same time, we can't ignore the cost of relying on these models. There are serious risks involved, especially if you're replacing low-wage workers with AI.
All in all, it's clear these companies have a long way to go before they sort out the risks and liabilities that come with these AI models. But I'm sure as hell excited to see what comes next – and I'm not the only one.