So, check this out. A little while ago, there was some crazy news out of Mason City, Iowa. Apparently, the school district there was all hooked up with ChatGPT, using it to enforce this ban on books that supposedly had descriptions of sex acts. But get this, one of the books they flagged was “Friday Night Lights” by Buzz Bissinger, and the author himself said there ain’t no descriptions like that in his book! Can you believe that?
Now, they eventually backtracked on the whole thing, but it just goes to show you how some government officials are hopping on the AI train and it’s causing some problems. I mean, they didn’t even read the damn books before they banned them! This whole situation was talked about by journalist Todd Feathers, who wrote about it in Wired. Here’s a snippet of their conversation.
According to Feathers, one of the most common ways AI is being used by the government is to summarize meetings and create PowerPoint presentations. But there are some cities out there that are pushing the boundaries and trying to use this technology in more unique ways. Like in Seattle, they’re thinking about using generative AI to simplify these super dense reports from the Office of Police Accountability. Makes sense, right? But here’s where it gets tricky. Putting sensitive information like that into corporate databases can be a big risk. You never know how that info might be used.
And get this, the state of Maine ain’t takin’ no chances. They imposed a six-month ban on their public employees using any kind of generative AI tech. They want to see how all the cybersecurity stuff plays out first. I mean, when OpenAI first launched ChatGPT, there were all these funky things happening like prompt injection attacks and users seeing each other’s chat histories. It’s a whole new technology that still needs some fine-tuning. So, Maine is like, “Nah, we’ll sit this one out for now.”
Now, as for how cities big and small are using AI, Feathers says there isn’t really a noticeable difference. It’s more about the personalities and politics within those governments. Take Seattle and Boston, for example. Both of ’em released guidelines for using generative AI. But Boston, they’re like, “Yeah, go ahead and experiment with this cool tool!” Seattle, on the other hand, is more cautious. They wanna make sure that if their employees wanna use AI, they gotta prove it’s gonna benefit the people of Seattle and follow strict rules. They ain’t just letting ’em go wild with it.
And let’s talk politics for a minute. Now, the Mason City case in Iowa had some political undertones, but Feathers doesn’t think the use of these tools falls along party lines. It’s more about government employees looking for easy solutions when they don’t wanna do their jobs or when things get complicated. But here’s the problem: AI tools can spit out false information that seems real. That’s a dangerous game to play.
So, will we see more cities using AI in the future? Well, according to Feathers, it’s not really up to them. These generative AI tools are being integrated into existing products that government employees already use without much decision-making. It’s like they’re just handed this stuff and told to figure it out. It’s a wild ride, my friends.
If you wanna read more about how state and city governments are navigating the AI landscape, you can check out Todd Feathers’ coverage. And there’s even a chatbot assistant being developed in Amarillo, Texas that’s gonna be up and running next year. They’re really diving into this AI world, y’all. Hang on tight!