Ethics In An Era Of Pervasive AI

What boards need to know to galvanize AI ethics programs.

As AI transitions from the “early adopter” phase into the “early majority,” many organizations are embracing AI to improve efficiency, and more mature adopters are leveraging it to differentiate their services and products and establish a competitive advantage. Yet adoption is outpacing preparedness: 56 percent of respondents to Deloitte’s State of AI in the Enterprise, 3rd Edition survey indicated their organization is slowing adoption of AI because of emerging risks, including ethics-related concerns. Many of the concerns noted in the survey touch on principles identified in Deloitte’s Trustworthy AI framework, such as cybersecurity, privacy, accountability, transparency, fairness and reliability.

But what does an ethical direction in AI look like? AI ethics refers to the organizational constructs that delineate right and wrong and are expressed in corporate values, policies, codes of ethics and guiding principles that are applied to AI technologies.

And why should the board and senior leadership be concerned with AI ethics? The unintended outcomes of AI gone wrong are well documented, and the reputational damage and consumer backlash they trigger are not dissimilar to those of cyber and privacy breaches. In addition, there is growing regulatory interest in the use of data, explainability and transparency in AI. Despite these risks, only about a third of AI adopters are actively addressing them, and only 36 percent are establishing policies or a board to guide AI ethics, according to Deloitte’s State of AI survey.

There are several considerations for boards and senior leaders looking to adopt, evangelize and actively address AI ethics. Among them are four questions for ethical introspection:

• Strategy and Impact: How are we strategically and thoughtfully adopting AI, and thinking through both the intended impacts and the potential risks to our business?

• Accountability: Who is responsible for overseeing the use of AI, including the use of data and outcomes of AI systems?

• Governance: What are we doing to promote trustworthy AI principles (noted above) through governing the use of AI and integrating it throughout our research, development and operations?

• Values: How are we ensuring that the use of AI is representative of our core values and principles established within the fabric of our organization (e.g., trust, transparency, quality, security, privacy)?

Balancing AI innovation against business and ethical risks is not a clear-cut exercise. Reflecting on the questions above can help the board and senior leadership reveal the organization’s appetite for risk and guide the integration of ethical practices throughout the organization.

Four Programs to Help Evangelize Ethical AI Adoption:

• Know where your AI is operating to manage and safeguard it: Determine where and how AI is being used in the organization to evaluate the appropriate use and design of AI, weighing desired benefits with risk exposure.

• Determine appropriate “guardrails” for use of AI: Proactively establish governance structures, organizational, procedural and technical safeguards to guide, assess and monitor use of AI, such as through a framework like Deloitte’s Trustworthy AI.

• Collaborate with external parties on leading practices around AI ethics: Work with industry peers and partners on leading AI solutions and practices.

• Establish policies or a group/board to guide AI ethics: Set the tone and expectations for ethical AI practices and establish a cross-functional team to coordinate and oversee AI initiatives.

Crafting a well-rounded program helps organizations implement a sustainable AI ethics strategy. Navigating this path requires the board and senior leaders to play a pivotal role in setting the strategic direction for ethical AI adoption by design.

Four Takeaways for the Board and Senior Leaders When Considering AI Ethics Risks:

• Start from the top: Change starts with the most senior leaders. Setting the right tone and priorities from the top is imperative to establishing an ethical foundation, instituting proper governance and creating meaningful change.

• Align risk-related efforts: Ethics risks don’t exist in a silo. Incorporate ethics into the organization’s broader risk framework, alongside operational, security, privacy and other risks, so that ethical standards are applied consistently across the business and its operations.

• Challenge your vendors and business partners: As more organizations buy rather than build AI solutions, it is imperative to collaborate with your vendors and partners and apply your AI ethics principles and requirements across your extended organization.

• Monitor regulatory shifts: Frameworks, policies and legislative actions regarding AI technologies are multiplying around the world. Keeping legal, risk, compliance and IT leaders informed is critical to building future-proof systems.

It is easy to get caught up in the appeal and complexity of AI. Those involved with the advancement of AI face a growing imperative to bring an ethical lens to what they design and build and how they protect it.

Ideally, this approach should be articulated through organizational ethical constructs that are applied throughout the AI strategy, innovation and product lifecycle, with each construct reflecting an understanding of AI-related vulnerabilities and opportunities.