The demand for artificial intelligence as a supplement to daily operations has spiked in the last year. According to Yiwen Lu, a reporter for The New York Times, this demand has resulted in a mass rush to create generative AI products and, at the same time, to improve the security of those same products.
AI has served in many capacities since its inception, but its most common contemporary use is as a tool that complements, rather than replaces, an employee's more arduous tasks. According to Lu, Mark Austin, vice president of data science at AT&T, implemented a chatbot called Ask AT&T that cut the time employees spent filling out forms from hours to minutes.
Lu further writes that “developers who used the chatbot increased their productivity from 20 to 50 percent.”
However, using AI as a tool isn't as straightforward as throwing a chatbot at all of your administrative tasks. One of the primary concerns with AI support is that AI models often use data from the questions and feedback they receive to train future iterations of those models. This isn't a huge problem when working with the public, but many companies handle private or sensitive data; as such, using AI to help with paperwork or coding could leak that data into a model's training set.
Salesforce's AI Cloud, a suite of generative AI products, includes some protections that address these concerns: chiefly, user data is not stored, which prevents it from being used to retrain the models, and Salesforce itself cannot view user input. While not perfect, this is a solid step toward securing these kinds of services.
AI has also been known to propagate false information, leading some to question the wisdom of using chatbots to fill out consumer information and similar administrative tasks. This is compounded by a lack of clear policy or regulation in the industries using AI, though Lu, citing Beena Ammanath, executive director of the Deloitte AI Institute, points out that most generative AI use is restricted to lower-risk industries "with a human in the loop."
Whatever the case, it is clear that the demand for generative AI calls for stronger regulation; meanwhile, companies that have yet to adopt such technology should prepare to implement it in the near future.