A Simple AI Governance Framework In The Age Of ChatGPT

When it comes to AI implementation, there are still a lot of lingering questions. This framework can help.

Editor’s Note: On June 13, Glenn Gow will be one of the keynote speakers at our upcoming AI Unleashed event, a concise, half-day, online forum crafted to help board members get up to speed quickly on Generative AI risk oversight responsibilities—and help better guide your strategic conversations with management.

Since ChatGPT and its artificially intelligent peers burst onto the scene, I’ve been fielding a lot of questions about their impact on the way we will be working. Senior executives and board members understand that generative AI will change the way we work, but the devil is truly in the details. They want to know: Which jobs? How will they change? And what can leaders do to get ahead of the disruption?

To answer those questions, I find it useful to reference Singapore’s Model AI Governance Framework (AI Governance Framework). The Singapore model, developed to help organizations adopt AI responsibly and effectively, considers AI implementation relative to human beings’ role in its use. (See image below)

The key aspect of this AI governance framework is its focus on “harm”:

1. The severity of harm that AI may cause in any particular role, including economic damage, reputational risk and matters of public security.

2. The probability that this harm will occur if AI is left to its own devices, without a human in the loop.

The AI Governance Framework therefore provides a very useful model for evaluating which tasks are suitable for greater AI autonomy, and how much human authority should be preserved in decision-making. Let’s have a look at three potential scenarios in detail:

HUMAN IN THE LOOP: (quadrant 2, upper right)

Description: AI can augment and support human decision-making, but humans remain essential to execution, as both the probability and the severity of AI doing harm are high.

Examples: Job roles in this quadrant include medical interpretation (e.g., radiology), software development, legal analysis, pharmaceutical testing, complex customer service problem-solving, human resources (recruitment and performance evaluation), corporate finance and due diligence.
 
Leadership tips: For roles in this quadrant, AI can remove a great deal of the most tedious and time-consuming work. This can make humans much more productive, contribute to their job satisfaction and improve both the quantity and quality of work product. To ensure success, managers must choose wisely the tasks that AI will take on. They must also educate and coach staff to understand AI’s “supporting role” so that human team members still feel a strong sense of agency and responsibility. (see article: What to do with all of that productivity?)

HUMAN OVER THE LOOP: (quadrants 1 and 4, upper left and lower right)

Description: AI can operate without humans, but good judgment encourages human oversight, as either the probability or the severity of harm is too significant to give AI free rein.

Examples: Job roles in these quadrants include autonomous transportation, medical and dental treatment planning, sales and marketing functions including design and written communication, equity/commodity/other asset trading, predictive industrial maintenance, advisory services of all kinds (e.g. financial planning, employee benefits, job scheduling) and treasury management.

Leadership tips: Due to AI’s superior capabilities in these quadrants, it can frequently be allowed to operate without constant human input. However, because the probability or severity of potential harm in these areas is significant, humans must supervise this work and intercede where human discretion is required. Managers should study the risks of using AI in these scenarios, pay close attention to evolving laws governing the use of AI and implement robust training programs for human supervisors. Vigilance must also be applied to ensure that supervisors do not become complacent in the face of AI that works perfectly “almost all of the time.” (see article: AI Risk Management for Corporate Boards)

HUMAN OUTSIDE THE LOOP: (quadrant 3, lower left)

Description: AI can make decisions as well as or better than humans, and can operate autonomously, as both the probability and the severity of harm resulting from use of AI are extremely low.

Examples: Job roles in this quadrant include product recommendation systems in e-commerce, weather forecasting, supply chain demand prediction, advertising media buying, traffic flow predictions and industrial or commercial building environment management.

Leadership tips: Since humans are out of the loop, very little training or special management is required with respect to the execution of this work. However, it is critical to set up effective performance monitoring as part of a feedback loop between affected staff, the marketplace and your software developers. While this area of AI promises the highest productivity at the lowest cost for organizations, the business landscape is always evolving, so there really is no such thing as “set it and forget it.”

So there you have it. Considering the severity and probability of harm that may be caused in any particular work domain gives organizational leaders an effective filter for deciding how much, and how soon, AI can be deployed. This, in turn, can help leaders develop an implementation roadmap, starting with those areas where both the probability and severity of harm are lowest.
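For readers who think in code, the quadrant logic above can be sketched as a simple decision rule. This is an illustrative sketch only: the function name, the 0-to-1 scales and the 0.5 threshold are assumptions of mine, not part of the Singapore Model AI Governance Framework.

```python
# Hypothetical sketch of the framework's quadrant logic. The thresholds and
# labels here are illustrative assumptions, not prescribed by the framework.

def oversight_level(harm_probability: float, harm_severity: float,
                    threshold: float = 0.5) -> str:
    """Map a task's estimated probability and severity of AI-caused harm
    (each on an assumed 0-1 scale) to a human-oversight level."""
    high_prob = harm_probability >= threshold
    high_sev = harm_severity >= threshold

    if high_prob and high_sev:
        # Both dimensions high (quadrant 2): humans remain essential to execution.
        return "human in the loop"
    if high_prob or high_sev:
        # Exactly one dimension high (quadrants 1 and 4): AI may operate,
        # but under active human supervision.
        return "human over the loop"
    # Both dimensions low (quadrant 3): AI can run autonomously,
    # subject to performance monitoring.
    return "human outside the loop"

# Example: a product-recommendation task with low probability and severity of harm
print(oversight_level(0.1, 0.2))  # human outside the loop
```

In practice, of course, the two harm estimates come from deliberation by management and the board rather than from a number on a scale; the sketch simply makes the mapping from those two judgments to an oversight posture explicit.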

