Critical Questions About AI That Every Executive Team Should Be Asking
Carole Switzer
Co-Founder of OCEG, a global nonprofit think tank that provides standards, guidelines, and online resources to help organizations achieve Principled Performance.
I’ve been discussing the challenge of establishing strong governance, management, and assurance over the use of artificial intelligence (AI) with Lee Dittmar, President of Business Solutions, Inc. Lee was one of the first thought leaders to collaborate with OCEG on the creation of GRC more than 20 years ago, and he advised leading tech companies on the earliest systems built to support GRC. We’ve continued to discuss the evolution of GRC over all these years, and it’s always a thought-provoking conversation. GRC for AI brings together ideas we have contemplated for years about how to oversee the performance, risk management, and compliance aspects of business processes, and now lets us put them into practice. As the news reminds us every day, AI is increasingly being used across businesses to improve operational efficiency, customer experience, and decision-making. But with the potential benefits of AI come significant risks and challenges that executive teams must address. Are leadership teams asking the right questions?
To help navigate the complex landscape of AI, here are five critical questions that Lee and I think every board and executive team should be asking:
1. Do we know which business units, departments, or functions are already using AI, in what ways and for what purposes?
This is an essential question, but getting an accurate answer can be a challenge. With the increasing availability of open-source AI frameworks and cloud-based AI services, teams across the organization may be using AI without the IT department’s knowledge. Some AI capabilities are also embedded within larger software systems, making their use difficult to identify. To get an accurate answer, executives need to work with the IT department to conduct a thorough audit of every system in use across the organization.
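To make the output of such an audit concrete, here is a minimal sketch, in Python, of the kind of inventory record it might produce. Everything here, from the field names to the example system, is an illustrative assumption, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsageRecord:
    """One row in a hypothetical enterprise AI inventory (illustrative only)."""
    system_name: str                 # internal tool or vendor product
    business_unit: str               # which team owns or uses it
    purpose: str                     # what decision or task the AI supports
    embedded_in: str | None = None   # parent software, if AI is a bundled feature
    data_sources: list[str] = field(default_factory=list)
    it_approved: bool = False        # whether the IT department knows about it

# A register of such records gives executives a single view of AI usage.
inventory = [
    AIUsageRecord(
        system_name="resume-screening-model",
        business_unit="HR",
        purpose="shortlisting job applicants",
        embedded_in="third-party applicant tracking system",
        data_sources=["applicant CVs"],
        it_approved=False,  # "shadow AI" the audit should surface
    ),
]
```

Even a simple register like this gives leadership a single view of where AI is in use and flags systems that bypassed IT review.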
2. Do we have any defined and documented governance processes for the development, deployment, and use of AI?
Establishing governance processes for the development, deployment, and use of AI is essential to mitigating the risks associated with its use. However, this is easier said than done. AI is a complex technology that involves multiple stakeholders, including data scientists, IT professionals, and business leaders, and developing a governance framework that is comprehensive yet flexible enough to accommodate all of them is a significant challenge. To answer this question, executives need to engage everyone involved in AI development and deployment, understand their needs, and build a governance framework that addresses their concerns.
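One lightweight way to make “defined and documented” tangible is a deployment gate: no AI system goes live until a documented set of governance sign-offs is complete. The sketch below is purely illustrative; the check names are assumptions, not an authoritative framework.

```python
# Illustrative deployment gate: an AI system is cleared for production only
# when every documented governance check has a recorded sign-off.
# The checks listed here are examples, not an authoritative framework.
REQUIRED_SIGNOFFS = [
    "data_privacy_review",      # data sources vetted by privacy/legal
    "model_validation",         # accuracy and bias testing documented
    "business_owner_approval",  # accountable business leader identified
    "it_security_review",       # access controls and logging in place
]

def cleared_for_deployment(signoffs: dict[str, str]) -> tuple[bool, list[str]]:
    """Return whether all required sign-offs exist, plus any that are missing."""
    missing = [check for check in REQUIRED_SIGNOFFS if check not in signoffs]
    return (not missing, missing)

ok, missing = cleared_for_deployment({
    "data_privacy_review": "signed, privacy office",
    "model_validation": "validation report v1.2 attached",
})
print(ok, missing)  # False ['business_owner_approval', 'it_security_review']
```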
3. Do we have a rigorous methodology for evaluating gaps, overlaps and risks in our use of AI?
Evaluating the gaps, overlaps, and risks associated with AI use is crucial to making informed decisions about the technology. It is also genuinely difficult: AI systems often involve multiple algorithms, data sources, and models, which makes it hard to understand how they work, let alone where they overlap or leave gaps. To answer this question, executives need to work with data scientists and other AI experts to develop a rigorous, repeatable methodology for evaluating the risks of each AI use.
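As a hint of what such a methodology might look like in practice, here is a toy risk-scoring pass over an inventory like the one sketched earlier. The risk categories and weights are assumptions chosen for illustration; a real methodology would be calibrated to the organization.

```python
# Illustrative only: a toy scoring pass over an AI inventory.
# Categories and weights are assumptions, not an established standard.
RISK_WEIGHTS = {
    "uses_personal_data": 3,   # privacy and regulatory exposure
    "customer_facing": 2,      # reputational exposure
    "automated_decision": 3,   # operational and legal exposure
    "no_human_review": 2,      # missing control
}

def risk_score(attributes: dict[str, bool]) -> int:
    """Sum the weights of every risk attribute that applies to a system."""
    return sum(w for name, w in RISK_WEIGHTS.items() if attributes.get(name))

# Example: the shadow HR screening system from the inventory sketch.
score = risk_score({
    "uses_personal_data": True,
    "customer_facing": False,
    "automated_decision": True,
    "no_human_review": True,
})
print(score)  # 8 -> flag for priority review under whatever threshold is set
```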
4. How do we identify and manage the reputational, relational, regulatory, and operational risks associated with our use of AI, and do so with sufficient agility to keep up with the velocity of change?
Identifying and managing the reputational, relational, regulatory, and operational risks of AI use is essential to avoid unnecessary exposure. The difficulty lies in the pace of the technology: the risks are constantly evolving, and organizations must be agile enough to keep up. To answer this question, executives need to work with legal, compliance, and risk management professionals to identify and manage AI risks on a continual basis, not as a one-time exercise.
5. How do we ensure that the algorithms and models developed in our AI systems are explainable, reliable, and trustworthy?
Ensuring that the algorithms and models in AI systems are explainable, reliable, and trustworthy is crucial to building trust in the technology, and the complexity of AI systems makes this hard to achieve. To answer this question, executives need to work with data scientists and other AI experts to adopt concrete practices, such as model documentation, rigorous testing, and explainability techniques that reveal which inputs drive a model’s outputs.
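As one small, concrete example of an explainability technique, the sketch below uses permutation importance from scikit-learn: shuffle each input feature and measure how much accuracy drops, revealing which features the model actually relies on. The dataset and model here are stand-ins; this illustrates one tool, not a complete explainability program.

```python
# Minimal explainability check using permutation importance.
# The dataset and model are stand-ins, not recommendations.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Features whose shuffling hurts accuracy most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)[:5]:
    print(f"{name}: {importance:.3f}")
```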
These are just the first five of many important questions that executives should be asking about GRC for AI. For a more comprehensive list, check out The Top 25 Questions Leadership Must Ask About GRC For AI.
By asking the right questions and taking a strategic approach to AI governance, your organization can harness the power of AI while minimizing its risks. The key is a holistic understanding of how your organization uses AI, effective governance processes, and proactive management of the risks that come with the technology. With these elements in place, your organization can reap the benefits of AI with confidence.