Artificial Intelligence (AI) is coming under increasing scrutiny in the boardroom, and for good reason. It both drives digital transformation and creates ethical risk, and boards need to examine the firm’s progress on both fronts.
On digital transformation, it’s clear that more than ever, companies are turning to AI to cope with an ambiguous, complex, uncertain and volatile future in the aftermath of the pandemic. Almost every industry may be upended. Yet business faces a foundational challenge in getting started with AI.
On ethics, there’s a long list of firms that have suffered reputational damage following an AI deployment. The AI Incident Database, run by the non-profit Responsible AI Collaborative and modeled on incident databases in healthcare and aviation, had logged 176 incidents as of April 2022, some involving the biggest brands in tech. In each incident, AI caused (or nearly caused) real-world harm, including physical injury or death, financial damage, or infringement of civil liberties.
We at BCG think that the way to meet these twin challenges and unlock the benefits of AI deployment at scale lies in the concept of the social license. It is a human-centered approach, in contrast to the widely used Responsible AI framework, which places greater emphasis on technology.
The term social license may have been used for the first time at a World Bank meeting in 1997 by Jim Cooney, an executive with the Canadian mining company Placer Dome, whose tailings dam at a gold mine in the Philippines had collapsed and released toxic mud that buried a village. He pointed out that if mining companies lost their social licenses, local and national communities would show little hesitation in shutting them down, even if the companies complied with formal regulations on land acquisition, environmental pollution and water use.
Companies cannot award themselves social licenses. In his 2014 book, The Social License: How to Keep Your Organization Legitimate, John Morrison, chief executive of the Institute for Human Rights and Business, said businesses must win their social license by proving they can be trusted.
Although the paths that businesses can take to obtain a social license for AI are not yet clear, the first signposts are becoming visible. The approach will need to vary by the problem that AI is tackling and the number of stakeholders involved in the process.
AI-powered self-driving car technology needs approval from a wide range of stakeholders, such as automobile owners and drivers (consumers); city, state and federal governments (regulators); and civil society (critics and advocates).
An AI project that enhances productivity in the workplace, in contrast, may need a social license only from employees and, possibly, trade unions and shareholders.
Gaining this social license is easier if the stakeholders are better informed. According to the BCG-MIT Sloan Management Review 2020 AI survey, 67 percent of employees who don’t understand AI also don’t trust AI-based decisions, while the number is just 23 percent among employees who have come to grips with the technology.
However, companies need to go beyond employees and educate customers, shareholders, civil society and the general population about the advantages and disadvantages of AI as part of their license-seeking process. They also need to educate regulators, who often find it difficult to keep pace with technology. Regulators will welcome the help of business leaders in identifying AI-related policies that balance innovation, creative destruction, economic impact and societal fairness.
This may seem like a long journey. But the starting point is clear: Corporations must realize that the use of AI, no matter how responsibly it is designed and how rigorously it is tested, will not be accepted automatically by society. Businesses may enjoy the legal right to use AI, but they must obtain a social license from all stakeholders to deploy it at scale.