Editor’s Note: The framework below was created by Dominique Shelton Leipzig, Partner at Mayer Brown, and Paul Washington, who leads The Conference Board’s ESG Center, following the first annual Digital Trust Summit, a conclave of about 75 key leaders in business, government, and academia convened in March at Brown University by Shelton Leipzig and Mayer Brown, The Conference Board, Nasdaq, The Watson Institute at Brown University, and Bank of America.
It is part of a new initiative to bring together CEOs and Board members to engage in a thorough discussion of ways of shaping digital culture in the age of AI to create true data leadership – thereby driving value and, importantly, reducing exposures to loss. Corporate Board Member is supporting this ongoing work, and we’ll have more in the months to come.
The latest developments in Artificial Intelligence (AI) promise to have a profound impact on business and society. They will test companies’ ability to address the business opportunities and risks associated with yet another transformative technology. They will also offer a test case for corporate boards’ commitment to ESG and multi-stakeholder capitalism.
We offer nine recommendations for boards to guide their companies in maximizing AI’s business benefits, while responsibly balancing the welfare of a firm’s multiple stakeholders and society at large.
1. Engage with the technology for first-hand understanding. For decades, companies have used AI to perform routine manual tasks and to assist human decision-making. More recently, AI has come closer to mimicking human thinking itself, with tools predicting behavior and, now, with “generative AI” tools such as ChatGPT, creating written, oral, and visual content. For boards to provide effective oversight and make key decisions regarding their company’s use of AI, they need to understand it. Management should provide boards with the opportunity, in a controlled environment, not only to see the technology, but also to engage with it first-hand, and to do so on an ongoing basis as the tools evolve.
2. Ask management to provide an overview–and then regular updates–on how AI intersects with the company’s business. Boards will need to develop a firm grasp of how AI affects the company’s activities in the marketplace (the products and services a company sells or buys), the workplace (operations and employees), and the public space (including government relations, communications, and corporate social responsibility). As they focus on AI, however, some boards may benefit from a more general update on how technology is transforming the business: according to a recent survey of 600 U.S. C-Suite executives, only 50 percent say their boards have a good understanding of how the digital transformation is affecting their business.
3. Integrate AI into Enterprise Risk Management (ERM). Boards should ensure that AI is considered as part of the company’s ERM program. This should include not just generative AI, but predictive and other forms of AI that raise many of the same legal, ethical, and reputation issues. It is also critical to consider the multiple, interconnected areas of risk associated with AI. For example, AI may serve as a catalyst for boards to view data security, data privacy, intellectual property, and antitrust not in separate silos, but collectively under the heading of “data protection” or, perhaps even more accurately, “knowledge protection.”
4. Incorporate AI into “Enterprise Opportunity Management.” AI could also lead companies to consider how they can more systematically identify, evaluate, escalate, address, and report on business opportunities throughout the year. The traditional annual strategic and business planning processes may not be keeping pace with the rate of business change or fully tap the broader knowledge within an organization.
5. Implement evolving controls. Boards should expect that the company’s controls over the use of AI will change as the underlying technology, its application in the company’s business, government regulation, and industry practices develop. At the outset, boards should expect the CEO to provide the organization with an overview of how AI is likely to affect the company’s business, as well as an initial set of principles to guide the use of AI. A good starting list includes ensuring that any use of AI retains the trust of the company’s stakeholders, is fair and unbiased, protects its systems and data, is transparently disclosed, and is fully compliant with laws and regulations.
6. Ensure management is devoting sufficient cross-functional resources to this area. No board wants to discover that it has skimped on data protection, and many corporate audit and risk committees meet in executive session with chief technology and information security officers to ask whether they have the resources necessary to protect the company. With generative AI, boards should ask not just whether management has the resources it needs (including in areas such as government relations, see below), but also how management is coordinating its efforts across corporate departments and business units to ensure a consistent and coherent approach to this technology. As a model, companies may wish to use the cross-functional groups they have established to guide sustainability efforts.
7. Consider how to address AI at the board level. With upcoming SEC and EU rules imposing obligations on boards with respect to cybersecurity, boards will be taking a fresh look at their role with respect to data security. That should trigger a broader discussion of how they will address AI. Boards may want to begin by asking management to focus on AI as a discrete topic meriting “a deep dive” at a board meeting. Soon thereafter, management should begin incorporating AI in existing board reports and processes. AI might, however, prompt boards to consider broader changes in how they are structured and spend their time. Traditionally, the full board takes the lead on strategy and operations–the opportunity side of the business. Indeed, only 15 percent of the S&P 500 have a committee focused on technology or science, and just two percent have a strategy or planning committee. Even if boards do not establish a standing committee, they may want to consider asking a few directors to focus on AI and other strategically important technological developments.
8. Engage in the development of public policy. Government is not writing on a blank slate here, with numerous laws on the books relating to data security and privacy, intellectual property, and consumer protection. Nonetheless, companies will be addressing AI against fast-moving, and sometimes conflicting, public policy developments worldwide. In October 2022, the Administration announced an AI Bill of Rights; in April, the Commerce Department issued a request for suggestions for the responsible regulation of AI; and in May, the White House announced $140 million in funding to support collaboration among industry, federal agencies, and academia to drive AI breakthroughs. Meanwhile, several bills to regulate AI have been introduced in Congress, while California, among other states, is considering legislation and regulation of AI. In December 2022, the Council of the EU adopted its position on the draft Artificial Intelligence Act. And countries from China to Egypt to the UK have entered the fray. In the past, government regulation has often focused on “shaming” companies for data breaches; generative AI presents the need for the private and public sectors to partner closely to develop regulatory regimes that responsibly harness the opportunities of the technology.
9. Approach generative AI with a multi-stakeholder ESG mindset. Boards will need to balance the competing expectations and interests of multiple stakeholders when it comes to AI. They will want to view consumers not just as a source of data, or employees merely as a resource that AI bots can replace, but as stakeholders whose long-term interests this technology can serve. Similarly, companies should focus on the social and environmental impacts of AI (data server farms, for example, can use a lot of energy).
Generative AI is a potential fulcrum for significant economic and societal change. As Mo Gawdat wrote in Scary Smart: The Future of Artificial Intelligence and How You Can Save the World, one day machines will likely be smarter in many ways–more knowledgeable, more able to assess risks and opportunities, more analytical and more creative–than any individual on the planet.
That would put humankind in a different position in the pecking order, potentially changing our perspective on our role in the world. That moment, if it comes, is still years away. In the meantime, boards should approach AI with a steady hand, an open mind, clear values, and a culture of continuous human learning.