By now, most companies have come to terms with an uncomfortable truth: to wait too long to adopt emerging technology is to risk extinction. Sure, every advance promising to boost productivity, build resilience, deliver a better customer experience and so on is fraught with peril. (Think Big Data, machine learning, cloud computing.) Yet the annals of death by disruption have shown that the biggest existential risk by far is failing to move quickly enough.
“As a board, as a leadership team, you have to recognize that one of your top risks in AI is not experimenting with AI,” Donna Wells, a board member at Walker & Dunlop and Mitek Systems, told board members gathered for a CBM roundtable discussion on digital transformation and AI held in partnership with Google Cloud. “If you look at the pace of change to business models that we’ve seen from new technologies in the past—and there’s every indication that AI will be as fast or faster—you really need to focus on the risk of doing nothing.”
Put simply, wait and see won’t cut it. Nor is caution realistic with a technology so readily available to all. AI differs dramatically from cloud computing and other recent advances in terms of accessibility, pointed out David Homovich, solutions consultant in the office of the CISO for Google Cloud.
“With cloud, there was some hesitation with people wanting to see what peers were doing, particularly in industries like financial services, healthcare and life sciences,” he said. “With AI, almost across the board, everybody’s rushing into AI—or in some cases not fully realizing that they are already using a lot of AI internally.”
“Boards and leadership teams would be smart to undertake an inventory to get their hands around where the company is experimenting with AI today,” agreed Wells. “Everyone I know who has gone through that process has been surprised by the extent, depth and breadth of experimentation that’s occurring just in the wild within our organizations.”
“The interesting and really new aspect is that generative AI has really democratized the technology,” agreed Helmuth Ludwig, a board member at Hitachi and Humanetics. “Whether you like it or not, people everywhere in your company are using generative AI. The question is, how do you channel AI use, and how do you get the best effect and the lowest risk for your company?”
Rampant unauthorized experimentation also heightens a multitude of other risks inextricably linked to AI, from loss of intellectual property to poor decision-making due to faulty data inputs. As an example of AI gone wildly wrong, Ludwig pointed to an incident at Samsung in which information that employees working in R&D shared with a large language model ended up in the public sphere.
Confidentiality Controls
Several directors agreed that how to harness the power of AI without losing material IP is a frequent topic of today’s boardroom discussions. “The insurance industry has been using machine learning, deep learning and so on for decades to identify fraud, for pricing and for other purposes, so we’re extremely familiar with its uses and potential damage,” said Gene Connell, a board member at Erie Insurance. “But large language models (LLMs) are a different animal, and that has raised some concerns throughout the organization and on the board.”
Using AI to process loan applications, for example, might result in unintended consequences in the approval process, such as issues with diversity. Or a retailer using a vendor’s AI-based customer service chatbot might run into issues with inappropriate responses being generated, said Glenn Marino, a board member at Upbound Group. “Am I going to be embarrassed by what this chatbot says? Or worse, have some sort of bias issue?”
Acknowledging that the vast majority of companies are likely to access AI capabilities through a platform provider, several directors expressed concern about mitigating data risk when entering into contractual agreements. “If companies have an agreement with a platform [provider], is that data protected?” asked Nigel Travis, board chair at Abercrombie & Fitch.
“As with any other technology solution in the organization, you need to be able to protect your data,” said Alicja Cade, director of financial services, office of the CISO at Google Cloud, who urged companies to put guardrails around employee use of AI. “The security of AI available for consumers is very different from that of the enterprise version. It’s important that you tie your users to make sure that they are using solutions entirely designed from the enterprise perspective to meet your rules.”
Google’s Cloud AI portfolio is built across multiple layers. “At the foundation, we have our infrastructure, which is designed to support machine learning on a secure platform,” she explained. “That includes building security by design, default and deployment into everything we do.”
Safeguarding data and IP begins with the confidentiality agreements built into enterprise AI platform contracts. “At Google, our data confidentiality agreement outlines that customer data is processed according to customer instructions,” said Homovich, who advises directors to ask questions about their company’s agreements. “By default, Google Cloud doesn’t use customer data to train its Foundation Models. Customers can use Google Cloud’s Foundation Models knowing that their prompts, responses and any Adapter Model training data aren’t used for the training of Foundation Models. That then helps you scope the controls you want in place for security, privacy, compliance, risk management and resiliency [purposes].”
Downstream Data Risks
Directors should also be helping management ensure that proper protections are in place, not only by the platform provider and within their own organization, but by any suppliers involved in the process. “In a lot of companies, I’ve seen that supplier and vendor risk is one of the last risks to be really fully understood within the enterprise,” noted Deirdre Evens, a board member at Regency Centers. “We think about risk management and assessments with Google or Microsoft, but that’s an area we sometimes forget. It’s other vendors using AI themselves that we need to be thinking about.”
It’s a reality of our ever-more-connected world that whenever data flows downstream, risks related to data-sharing flow back upstream. “When we were thinking through cybersecurity in its infant stage, we looked to protect our own house first… it took a while before we started looking to those points of connection into suppliers, vendors and partners,” pointed out Wells. “Hackers took full advantage of those doorways into the secure home environments that we thought we had built. So it behooves us to learn from that experience and be thinking about upstream and downstream risk areas sooner in AI’s evolution than we did, perhaps, with cyber.”
Ultimately, gaps in security anywhere along the chain open a company up to compliance issues. “From a regulatory perspective, you have responsibility for what you’re doing but also what your suppliers are doing,” noted Cade. “So, how do you get visibility of that in terms of protecting data and responsible AI as well? Are you working with a supplier that doesn’t think about what could be used, potentially, for harm, even unintentionally?” Red-teaming, or structured testing efforts that seek to expose flaws and vulnerabilities in AI by generating malicious prompts to test a system’s ability to produce harmful outputs or leak information, can be useful in assessing this risk, she added.
Directors described as daunting the task of pursuing innovation while simultaneously guarding data privacy and meeting regulatory requirements in various markets. “You have to think about the stakeholder impact—employees, customers, shareholders, the environment, all of that,” explained Sara Mathew, a board member at Freddie Mac, State Street, Carnival and Dropbox. “We’ve tried to cordon off the data, because we don’t want to be in violation of GDPR, and then allow the company freedom to work with maybe one or two customers. It’s been really encouraging to see what people have come up with.”
In these early days of AI, taking an active part in shaping the regulatory environment is also critical. “As with cyber, it’s very important that organizations are involved with the industry bodies and have direct dialogues with regulators to make sure those regulations are risk-based, rather than prescriptive or control-focused,” said Cade. “So that engagement is critical.”
The Human Factor
Policies on accountability are another way companies can mitigate risk and minimize the potential for rogue outcomes. A company might, for example, address concerns that flaws in AI-generated information could lead to inaccurate projections or faulty decision-making by clarifying that responsibility lies with the individual employee using AI as a tool, not the AI itself.
“In one of the companies I work with, we have experienced team members who don’t have deep domain knowledge just believing what comes out of the LLM,” said Ludwig, who sees lack of awareness about the hallucination issues related to generative AI as a significant risk. “Experienced colleagues understand the need to check, which is where you find a big difference in successful application of LLMs.”
“Before this, the user community for technology was your IT department,” agreed Evens. “Now, it’s really anybody. So a good responsible use and notification of use policy is a big help to manage and govern that. But as a board, we also need to ensure the employee base is properly trained and to know what kind of questions we should be asking management about people operating in an environment where AI is now going to be part of the framework of the company.”
As with cybersecurity, employee education and training on the risks and dangers of AI is a must, said Cade. “Users will always be human, and the susceptibility of users falling victim to risks, dangers and scams has to be reduced. This is not just a risk that your CISO owns; it is everybody’s risk in the organization. So, if I were a board member, I would be saying to every executive of the key business unit or function: What is your risk exposure—not only in cyber but in AI? Do you actually know what your exposure is, and what are you doing in terms of trying to reduce it, including user awareness? Don’t treat it as a bolt-on. It’s part of your business.”
It’s equally critical that every director has an understanding of AI capabilities, best practices and industry standards like ISO 42001 and the NIST AI Risk Management Framework. Gone are the days when one board member could be anointed to serve as the expert on a technology like AI, said Ludwig. “We need to build up competence on the board side,” he said. “At the end of the day, most boards recognize that this is such a critical board decision that we have to make sure that the whole board has adequate knowledge of this topic, as it is really foundational for our company.”
“The ideal approach for boards would be to make sure everyone on the board understands to a certain degree not only the role AI can play in your business and in your strategy but the risks,” advised Cade. “Be aware and really think and operate from the perspective that, just like cyber risk, AI risk is everyone’s business.”