A global survey has found that the healthcare sector is second only to the technology sector in its use of AI, yet healthcare organisations are not investing enough in protecting those AI deployments.
The survey, commissioned by ExtraHop, asked around 1,200 IT and security leaders worldwide how they secure and govern the use of generative AI tools, both now and in the future. The results were concerning.
Among healthcare respondents, 73% said their employees frequently or sometimes use generative AI tools or large language models. That is the second highest rate of any sector, behind only the technology industry at 85%; the government sector came last at 55%.
Healthcare organisations lag in managing this usage, however: fewer than half have monitoring technology or governance policies in place.
Globally, IT decision makers in the healthcare sector are confident in their ability to protect against AI threats, with 82% expressing confidence. Thirty-nine percent provide training on generative AI use, while 30% have banned it outright. UK respondents were less assured, with 43% saying they do not believe they can protect against such threats.
Asked about their biggest concerns with generative AI, healthcare leaders ranked exposure of personally identifiable information first, followed by exposure of trade secrets or intellectual property.
There was broad support for action: globally, 74% of respondents favour investing in generative AI security measures. The UK response was less enthusiastic, with only around half in agreement. The UK also reported the lowest adoption of AI tools, with half of respondents saying their employees rarely or never use them.
Overall, the study reveals significant gaps in AI security, and healthcare organisations need to ensure their AI implementations are properly secured. Those interested in AI in healthcare can learn more at the Digital Health AI and Data conference later this month.