What Directors Think: Balancing AI Risks With Opportunities In 2025

[Chart: board members' responses on AI risk. Source: Diligent Institute]
Our recent survey of 200+ public company board members reveals a need for businesses to weigh AI-driven innovation against core company values.

AI has been moving rapidly from a future consideration to a present-day priority, and recent findings from What Directors Think 2025—a Corporate Board Member survey conducted in partnership with Diligent Institute and FTI Consulting—show the technology’s influence on the business landscape is continuing to grow.

Eighty percent of the 200+ board members surveyed say their company has taken action concerning AI: 44 percent say they’ve already incorporated AI or generative AI into one or more areas of the business, including products and services, and 37 percent have assigned responsibility to a senior leader, such as a CTO, CIO or chief AI officer (CAIO), when AI is discussed at board meetings.

But with business growth seen as the number one priority for directors in 2025 (more than three-quarters identified this as their primary objective in the survey) and AI adoption viewed as increasingly non-negotiable in the corporate world, how can AI-powered digital transformation really help businesses to scale more quickly and more economically?

Many directors believe AI has the potential to optimize operational and cost efficiency, but factors such as increased employee productivity and access to better data are also seen as key opportunities.

The latter two have risen three and four places, respectively, when compared with the previous 2024 report, with boards increasingly seeing these as important use cases for artificial intelligence alongside other aspects like product innovation and enhanced customer support capabilities.

While the majority of boards are already using AI in at least some capacity, 1 in 5 have taken no action at all regarding AI or generative AI—highlighting that many businesses are still treating this area with caution. For some board members, core anxieties such as a lack of internal AI knowledge and data privacy concerns may outweigh the opportunities.

Boards are under increasing pressure to adopt AI

Driven by a fear of falling behind competitors and growing customer expectations, boards and leadership teams may feel forced into an accelerated adoption path with AI. But of course, there’s an important “promise and peril” dilemma for businesses to navigate, particularly given the very public discussion about the potential dangers of generative AI.

Although global harmonization of evolving AI regulations is the nirvana for legal and compliance teams, we are seeing a divergence in approaches born of different geopolitical priorities. Governments are currently deciding how best to govern AI, as they seek a balance between an innovation-focused agenda that targets the potential for incredible social and economic transformation and the need to govern AI systems in a manner that addresses concerns about legal, ethical and societal harms.

At one end of the spectrum, we see a much stronger emphasis on protecting citizens’ rights and ensuring the ethical use of AI technologies. The primary example is the EU’s AI Act, which creates a comprehensive regulatory framework for AI governance that aims to offer a high level of protection for health, safety and fundamental human rights. It seeks to ensure AI becomes a “force for good,” with a focus on promoting human-centric and trustworthy AI.

At the other end of the spectrum, we see an emphasis on an innovation narrative, where regulation is often positioned as an inhibitor to progress. In a manner not dissimilar to companies and their boardrooms, governments are also under pressure not to fall behind geopolitical competitors. The regulatory focus so far in many other countries, including the U.S. and UK, has tended away from a comprehensive framework and enforceable law like the AI Act, and toward a sector-specific approach built on guidelines and standards for different industries.

“AI undoubtedly requires strong governance,” says Dale Waterman, global solution designer lead at Diligent. “The issue of competing values is not a new one for governments and the technology sector. We’ve been grappling with the need to find the right balance between the competing interests of privacy and national security for many years. Creating an environment for AI innovation while protecting timeless societal values and ensuring the ethical use of AI is, arguably, one of the defining issues of our lifetimes.”

A lack of ‘AI literacy’ is seen as the biggest risk for businesses

While directors feel there are significant AI-driven opportunities to be exploited in areas such as cost efficiency, data processing and employee productivity and engagement, several apparent risk factors remain concerning the use of generative AI in particular.

Among these concerns is a perceived mismatch between the technology’s capabilities and board members’ knowledge and ability to make informed decisions. Almost a third of directors cite the procurement and implementation of AI as a considerable risk. Conflicts with data privacy principles, the tendency of generative AI tools to “hallucinate,” the lack of subject matter experts in risk and strategy, and IP infringement are also seen as significant potential risks.

Without a solid foundation of AI literacy, leaders risk making short-sighted choices. The geopolitical divide in AI governance further complicates the delicate balance between innovation and ethics, but boards that proactively educate themselves will be better equipped to interpret and respond to evolving regulations.

“Boards are racing to harness AI’s potential, but they must also uphold company values and safeguard the hard-earned trust of their customers, partners and employees,” says Waterman. “During a time of regulatory uncertainty and ambiguity, where laws will lag behind technology, boards need to find a balance between good governance and innovation to anchor their decision-making in ethical principles that will stand the test of time when we look back in the mirror in the years ahead. AI literacy will be the foundation for that sound decision-making.”

Where to Next?

If boards can successfully navigate the tricky balance between AI governance and innovation while addressing the pressing need for AI literacy, they stand to benefit from valuable opportunities such as increased cost efficiency and employee productivity, while mitigating the risks associated with AI knowledge gaps and potential data vulnerabilities.

Boards that prioritize AI education and align their digital transformation strategies with their core business values will be better positioned to adapt to evolving regulations and maintain a competitive edge in an increasingly AI-driven corporate landscape.

