The number of public companies disclosing artificial intelligence (“AI”) as a material risk factor in their SEC filings has grown exponentially from virtually none in 2016 to more than 80 this year alone.
Multinational conglomerate Alphabet, for example, included the following in its 2018 10-K disclosures: “New products and services, including those that incorporate or utilize artificial intelligence and machine learning, can raise new or exacerbate existing ethical, technological, legal, and other challenges, which may negatively affect our brands and demand for our products and services and adversely affect our revenues and operating results.”
Microsoft's 10-K filed in August 2018 similarly warned: “AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”
AI is, sometimes dramatically, altering or enhancing business models, optimizing workflows, changing workforce composition, and generating new product lines and revenue opportunities. Accordingly, the topic increasingly demands the attention of the Board of Directors. In addition to materially affecting business models and operations, corporate activities involving AI can also garner significant attention from shareholders, employees, other corporate stakeholders and the community. For example, a 2018 shareholder proposal, joined by protests from employees, civil liberties organizations and academics, sought to ban a company's development of an AI technology until its board took appropriate oversight measures to protect the rights of customers and other stakeholders.
Directors are required under state corporate law—using Delaware law as the most logical reference point for this general statement—to fulfill duties of loyalty and care to the company and its shareholders. Courts generally defer to board decision-making where the decisions are made in good faith without conflict, and where the board has sufficiently exercised its collective judgment, reasonably based on fully informed consideration of the facts relevant to the issue. Recent judicial decisions reinforce the idea that fulfillment of these duties requires the board to adequately exercise oversight of corporate activities and risks. That oversight rests on active engagement to develop an understanding of key risks, and on establishing and maintaining reporting systems that produce information for these purposes on a continuous basis.
Further, while some profess that AI presents an existential threat, there are also some who believe that shareholder primacy is a societal threat, and who advocate for stakeholder corporate governance to fulfill corporate purpose. Put simply, the argument is that directors' primary fiduciary duty is to serve corporate purposes and promote the long-term value of the corporation for all stakeholders—customers, employees, suppliers, communities, and shareholders—and that the board is therefore best positioned to make the proper decisions.
The purpose of this article is not to advocate for stakeholder primacy, but to describe a landscape that—at a minimum—further complicates perceptions of whether and how the board should exercise its duties and authority, for example, when employees vocally oppose commercial decisions relating to the use of AI technologies, such as the Google walkout over Google's partnership with Customs and Border Protection and Immigration and Customs Enforcement to “streamline” their work. In any case, it is clear that staying aware of, and currently informed about, the significant implications of the development and/or use of AI technologies for key constituencies is relevant to the oversight that falls within the fundamental governance responsibility of the board.
In particular, a board’s oversight in the context of use and development of AI technologies would likely involve gaining an understanding, to the extent meaningfully relevant to the company, of:
1) AI’s disruption in the company’s industry and its impact on competition, as well as its impact on the company’s businesses (e.g., how it can enhance or jeopardize the company’s business model);
2) the company’s current and potential development and/or use of AI and resulting impact on the company’s operations, including to optimize performance (this could include boards’ own use of AI tools to augment collective data intelligence and strategic decision-making, including for capital allocation);
3) the data used for the AI technology, where it originates, with whom it is shared, and how it is managed and secured;
4) related regulatory compliance, including privacy laws (based on the significant use of data with these technologies), regulation applicable to the products or services, and ethical considerations; and
5) messaging (or lack thereof) to shareholders, customers, employees and other business partners about the company’s AI development or use.
With this foundation, the board can oversee potential changes, explore risk tolerance, and guide key decisions on the company’s trajectory in the use and development of AI. Further, because of rapidly changing laws and technology capabilities, any changes or significant development in the areas above should be periodically reported to and reviewed by the board. Following that investigation, distillation and public disclosure of material risks and mitigation strategies may be necessary for the protection of the corporation and its owners, and for managing the perceptions of other stakeholders.