AI in the workplace has become a pressing issue for chief information security officers, who must protect their organizations against the technology’s security risks while leveraging its promise for business processes. It’s a tough balancing act, which is why a group of CISOs is putting together a “quick guide” to help their peers navigate the integration of AI into their IT systems.
Organizations differ widely in their posture: some have blocked AI entirely until they can establish a proper governance structure, while others are eager to start using it right away. That divergence is driving the need for a common guide that lays out the fundamental principles of AI use for CISOs.
At the InfoSec World 2023 conference, Tom Scurrah, VP of content and programs at the Cybersecurity Collaborative, discussed the varying perspectives on AI use cases for businesses. He noted that use cases differ with a company’s risk profile, which is why guidelines are essential.
A conference panel drew members from the Cybersecurity Collaborative industry peer group: Jason Mortensen of Lenovo, Cheryl Nifong of the University of Texas at Arlington, and Greg Berkin of Lisata Therapeutics. All three serve on a task force focused on AI security, with a mission to create a quick guide for CISOs, establish governance policies, and build safe operating environments for AI.
Mortensen sees AI as a competitive advantage for Lenovo: if the company doesn’t embrace the technology, its competitors will leave it behind. He emphasized keeping track of all AI projects within the company to ensure security, and noted that Lenovo has implemented guidelines and agreements to understand employees’ intentions when using AI and to foster collaboration among teams.
Nifong described how AI has become an integral part of the environment at the University of Texas at Arlington. The university has created an executive work council to oversee AI use and develop controls and governance, and it is drafting policies for students and faculty to ensure ethical AI usage and safeguard valuable data.
Berkin takes a more cautious approach. As a highly regulated pharmaceutical company, Lisata Therapeutics cannot accept the risks associated with AI, and it has blocked the use of generative AI applications until the privacy and data-loss implications are clearer. Berkin emphasized prioritizing the safety of company data and patient health records.
Overall, the panelists agreed that AI brings both opportunities and risks. Implementation should proceed with caution, backed by proper governance, security controls, and user awareness. It’s a complex journey, but one organizations must undertake to stay competitive in today’s rapidly evolving technological landscape.