Every decade or so, a transformative technology sweeps through the business world, threatening incumbents and creating huge new opportunities for the firms that figure it out first. In the 1990s, it was the internet. Today, it’s artificial intelligence.
Just as with the tech bubble, however, AI will create big winners and big losers. Nobody expects corporate directors to fully understand how the technology works or even how it can be applied to the businesses they are responsible for overseeing. But they can ask the right questions. And the most important question at this point is: What’s the return on investment?
“One of the big challenges for directors is that AI is being talked about as transformative for industry, but the ROI is not yet proven for some industries,” says Katherine Forrest, a partner with Paul Weiss who focuses on technology law.
Software code, for example, will soon be written largely by AI tools rather than human coders. And the retail industry has adopted AI widely across the ordering process, “but it isn’t making all your clothes,” Forrest says. Likewise, the legal profession is experimenting with AI to research and write briefs, but there are lingering problems with the security of confidential information and with “hallucinations,” in which AI agents deliver exactly what the client asked for, except that the case citations and academic articles are entirely made up.
ANTICIPATING RISK
Before approving significant investments in AI technology, directors should ask three basic questions: What’s the ROI, how are our competitors using it, and what’s the timeline for results? The answers will differ by industry, but the important thing is that directors get them from the managers who will actually be deploying the technology. Only then should directors approve spending shareholder dollars on AI projects. Just as with the internet, AI promises to deliver massive gains in efficiency. But it also exposes companies to massive risks, some of which are almost impossible to predict. Few companies outside the tech realm can afford to invest in the software and server farms that actually run AI programs, so using AI means transferring your company’s data to the cloud, where it might be stolen or infected with malware by hostile actors.
The tools AI produces can also open up new potential liabilities, such as claims that automated hiring or credit-approval systems are biased against women and minorities. Nobody even wants to think about the lawsuits that may erupt after an entirely AI-driven industrial process goes haywire and injures or kills a person. Is the machine to blame, or the person who pushed the button to start it up?
WHAT TO ASK
The common law in the U.S. has a remarkable track record of adapting liability rules to new technology and will no doubt produce predictable rules for AI. But directors can’t wait for the courts to catch up. They have a duty to anticipate risks and ask tough questions of the corporate counsel, chief technology officer and other managers. Do you have the necessary cybersecurity resources? Have you analyzed the risks? Have any risks materialized already, and how were they managed?
Avoiding AI isn’t an answer, either. Forrest likens a company that hasn’t yet adopted the technology to a castle with high walls and a moat around it. Outside, a vibrant new community is growing, and the people inside can’t ignore it.
“You need to look around outside the castle and decide if you want to let a drawbridge down,” Forrest says. “But you need to let down a careful, cyber-controlled drawbridge.”