Within the last year, AI adoption has become a major topic of discussion among corporate board members as companies search for ways to use AI technology to compete more effectively within their industry. Corporate Board Member columnist Matthew Scott recently spoke with Diligent CEO Brian Stafford, who shared what he has learned about AI adoption from the 700,000 board members, CEOs, CFOs and GCs who use his company’s board portal and other services. The following is lightly edited for length and clarity.
What are the top challenges associated with AI adoption expressed by corporate board clients at Diligent?
In the area of AI adoption, the top questions clients have are, “How widespread is AI being used across the organization, and for how many different uses? What are the policies around how AI is used within the organization? And for each of the different use cases of AI, how does the company manage the potential risks associated with those use cases, whether it is hallucinations, privacy issues or third-party content—how should you use it and how should you operate?”
Many of our clients are also trying to understand the risks and opportunities that come with the workforce and skills transitions that accompany AI adoption.
Understanding the impact AI adoption can have on the overall organization is critically important. Is there anything about the potential risks you’ve raised that boards should give more attention to?
If you look at all the potential use cases, I think most organizations would tell you that one of the exciting things is how quickly AI has been taken up by so many of their employees. So, boards should focus on: What is the governance of AI? How are you using it and where? What controls and safeguards are you putting in place regarding its overall management? And how are you using AI to drive more productivity, such as improving the code you write to get products to market faster, or creating new use cases for services to clients?
Attention to each of those areas will help boards understand how their company is using AI within the organization.
Can you give specific examples of how some boards are dealing with the risks associated with AI adoption?
Many boards are creating policies to help better govern how the organization uses AI. Because AI has taken off so quickly, there’s a big emphasis from boards on first asking the question, “Where are we using AI in our organization? Who is using it? How are they using it? And what are the policies around usage?”
A lot of companies are still grappling with answering those questions. Many organizations, with oversight from the board, are appointing “AI champions”—someone such as a global head of AI who can clarify some of the opportunities available and help the organization understand where and how AI can be used while avoiding risks.
What is the role of the “AI champion”?
Many organizations are picking a head of AI or AI champion, in order to have a person who has knowledge of and, in some cases, responsibility for how AI is used within the organization. They might handle an educational component, understanding usage and adoption within different groups of the organization, and ultimately, they can help determine where an organization should not be using AI across its products or services.
What types of policies around AI adoption are your clients saying they are using to help their organization run better?
Most of the policies that we see our clients using around AI are focused on data privacy. One policy, for example, addresses the fact that there are many different large language models—not knowing what data a specific large language model was trained on can open a company up to copyright challenges.
But there’s also a big concern among many companies about how employees may be feeding customer information into their use of AI, which would mean they’d be training a language model on customer data—a violation of privacy for most organizations. So, understanding the technology, knowing what can and should not be used with AI, and creating policies around that is incredibly important.
Adopting AI requires us to talk about AI governance. So, is it the same as dealing with information governance, or is it more like cybersecurity governance for most companies?
Each set of risks or opportunities within an organization requires its own form of oversight. How you frame oversight of cyber risk is different from how you might look at enterprise risk, which in turn may differ from how you look at climate risk.
To ramp up AI adoption in a positive way, many organizations have different use cases to help create content faster. But who’s making sure that the content is actually accurate, and that the right policies are in place around it? If you are using AI to help accelerate product development and bring new things to market faster, have you determined what type of code that AI was trained on? What policies do you have to ensure you think about that? When you are inputting customer data to provide customer support information, how do you know that third-party organizations have the right safeguards around your customer data? All those different components exist within an organization. Directors have to make sure that the company understands how these issues affect operations and then decide what the right policies should be.
How would you advise most boards to go about the process of adopting AI at their company?
First, I would say, it’s highly likely that your organization already has many AI use cases under way within it. So, it is as much about understanding what is currently being done within the organization as it is about asking the management team to articulate its strategy around AI. Whatever industry you operate in, no matter which country, each organization should have a strategy around usage and adoption of AI. There should also be a commitment to govern how AI is being used. Today, we are increasingly seeing governments racing ahead with regulation as they try to understand AI. They are asking companies, “How are you, in fact, using AI?” The European Union has recently done this.
Boards should start by asking, “What’s our strategy as an organization?” And then ask, “How can we best deploy powerful AI tools to help execute that strategy better and faster?” Then the next question is, “How do we make sure we put the right guardrails in place, or the right governance structure in place to make sure we’re not creating additional risk?” Boards need to address these questions regarding privacy, disclosures, and many other areas of importance to their organization.
On a positive note, we’ve found that the boards of our clients have been going through a process about every 18 months of asking their company, “What are the new geopolitical risks we face that should be higher on our radar today? How are we thinking about managing that risk, in addition to supply chain risk or new risks in the product development cycle?” Boards are doing that on a regular basis now, and they’re asking the same questions around understanding and managing the risks of adopting AI. Most organizations are focusing on where they can start the adoption process and how AI can help accelerate their strategy. So, it’s important for companies to have people on the board who can think differently about how AI can drive phenomenal progress and positive outcomes for their clients. Mitigating risk and providing good governance means having people at the board level who are asking management hard questions and helping to hold management accountable. That’s what good oversight is.