Playbook

Future-Ready Boards

Big Thinking, Practical Oversight

Special report

The AI-Enabled Board

Directors are starting to figure out that AI in the boardroom is less about shiny tools and more about sharpening the one advantage they still have over algorithms: seasoned judgment.

By C.J. Prince

Herman Bulls remembers when he was still doing calculus on a slide rule at West Point, and he recalls the panic when handheld calculators arrived. Students would stop thinking, the logic went; math itself would go soft. That never happened, of course; the calculator just raised the bar for what basic competence looked like. “It served as a productivity enhancement tool,” says Bulls, now vice chair at USAA and chair of Fluence Energy.

As a tool, he adds, AI is phenomenal at pattern recognition, at first drafts and repetitive tasks, but the real test of boards will be whether directors use it to sharpen, rather than outsource, their own judgment. In other words: use the machine to do what it’s good at, then make sure the adults are still in the room.

For a growing number of boards, that’s becoming the working definition of “AI-enabled governance.” Directors aren’t trying to turn themselves into prompt-engineering wizards so much as figuring out where a tireless, pattern-spotting apprentice can change the economics of oversight—compressing 100-page decks, surfacing anomalies management didn’t flag or pressure-testing the rosy assumptions behind a capital request. The promise is less about robots in the boardroom than about narrowing the information gap between what management knows and what the board can realistically digest before it has to vote.

But as Bulls and other directors point out, that same power also raises the bar on directors’ own competence and accountability. If AI makes it easier to interrogate historical performance, compare strategic options and see around corners, it also becomes harder to claim you simply didn’t know where the risks and weak links were. The boards that come out ahead, they argue, will be the ones that lean into AI as an extension of their judgment—not a shortcut around it. They’ll also be the ones that treat AI less like a technology problem and more like the business judgment problem it actually represents.

In interviews with board members and AI experts, a handful of key practical lessons emerged about what effective AI-enabled governance should (and shouldn’t) look like.

1.

RAISE THE BASELINE: AI LITERACY IS NOW TABLE STAKES.

The elephant in most boardrooms is a knowledge gap that goes largely unspoken. Beena Ammanath, global head of the Deloitte AI Institute, offers a stark number: 79 percent of board members report limited or no AI knowledge.

Cheemin Bo-Linn, director and audit chair with KORE Wireless and advisory board member at Bain Capital Ventures, calls this the “attention and knowledge gap” phase: highly aware of the stakes but far behind on the foundational understanding required to perform duty of care and duty of loyalty in an AI-driven economy. “They all think it’s important,” says Bo-Linn, “but many of them are so behind in the education—and even if they’re up on the education, they’re behind in understanding what makes a successful AI implementation. So, therefore, it’s more difficult for them to ask the right questions.”

The most practical path to competence isn’t attending a university executive program or hiring a consultant to give a one-time presentation. It’s actually using the technology. “Nothing beats hands-on experience,” says Florin Rotar, group CTO at Atos and Avanade’s former chief AI officer.

Bulls tells his peers to start with something that won’t end up in discovery. “Use ChatGPT to write a letter to your son or your mother to see the power of it.” Then, he draws a hard perimeter. “I don’t want anybody in the company using open-source GPT to do anything relative to proprietary information.” A similar, equally critical qualifier: use the advanced versions. “Make sure you get the good stuff and not the free AI, which is dumb as a cat,” says Rotar, who adds that the difference between free and paid AI is “like speaking with a middle schooler or speaking with somebody who has quadruple PhDs.”

Jeanne Beliveau-Dunn, an independent director at Columbus McKinnon, Edison International and Crewdle, agrees regular use is a must. “Everyone should be using AI tools daily in their lives to understand the power and impact, as well as what they cannot do,” she says, adding that the technology’s capabilities “continue to improve exponentially.”

A second lever is embedding AI education into existing board activities. Bulls has made it a quarterly requirement at his company that board training attendance be published, which signals the expectation while normalizing continuous learning. Many boards are experimenting with offsite sessions where directors work through actual prompts together, learning how to refine queries and validate responses—demystifying the tool through direct engagement rather than theoretical instruction.

I don’t want anybody in the company using open source GPT to do anything relative to proprietary information.

—Herman Bulls, USAA and Fluence Energy

Thought Leadership By BCG

Leadership turnover at the top of corporate America is accelerating. The average global tenure of outgoing CEOs was 7.1 years in 2025, down from a high of 8.3 years in 2021, according to the Russell Reynolds CEO Turnover Index. Lower tolerance for poor performance or conduct, rising activist pressure and intensifying technological, geopolitical, demographic and stakeholder demands are reshaping the role—favoring agility over stability. For boards, the implication is clear: CEO succession is becoming both more frequent and more consequential.

Yet, while many corporate boards profess a desire for “future-ready” CEOs, their selection processes often remain traditional and backward-looking. In a world defined by complexity, uncertainty and disruption, that mismatch creates risk. Boards can strengthen succession and selection by embedding future readiness as the primary lens across the hiring process. Four practical shifts can move boards closer to that goal now.

1. Start with the Future, Not the Résumé

Hiring a future-ready CEO begins by anchoring the search in the company’s future context, not its past. Even amid uncertainty about enterprise vision or strategy, boards can clarify the key inflection points ahead: AI and gen AI adoption, technological disruption, shifting competitive dynamics, supply volatility, changing consumer behaviors and evolving stakeholder expectations. With alignment on where the company needs to go, boards can translate those imperatives into the capabilities and traits that matter most.

This future-first approach flips the traditional model on its head. Rather than scanning résumés for pedigree and proven playbooks, boards should ask which capabilities enable success in ambiguous, volatile environments. Leaders who demonstrate learning agility, comfort with ambiguity, narrative-building, energy creation, focus and simplification, and sound judgment during structural change or crisis are often better positioned for future performance than those whose accomplishments sit firmly in the rearview mirror.

2. Test for Judgment, Not Just Track Record

Strong results are necessary but not sufficient. Decisions made under pressure or uncertainty reveal far more about readiness than outcomes achieved in stable conditions. Boards should probe how candidates made consequential calls with incomplete information, balanced near-term performance with longer-term renewal, adapted when conditions shifted unexpectedly, elevated their teams and proactively managed reputational risk.

Exploring candidates’ decision narratives—how they weighed tradeoffs, incorporated stakeholder perspectives, learned from customers and competitors and corrected course after missteps—offers insight into how they think, not just what they delivered. This is often the clearest indicator of cognitive agility in volatile environments.

2.

BUILD AI GOVERNANCE INTO WHAT YOU ALREADY DO.

With AI governance now firmly on the board’s plate, Ammanath warns against bolting on an “AI committee” and calling it a day. A serious roadmap should “include clear accountability and oversight structures, policies around ethical use and risk management and regular education and scenario-planning for directors and executives.” Boards should set milestones for AI integration, require transparent reporting on outcomes and “continually update frameworks to reflect evolving regulatory and societal expectations,” says Ammanath. In short, treat AI like any other enterprise-wide risk and opportunity.

Rotar is especially wary of isolating AI in its own governance silo. “AI should be a topic in other committees,” he says. “It is really, really embedded in financial performance, in cyber, in business strategy, in talent and leadership.” Along the same lines, he suggests a simple diagnostic question for management: “Who is on point for owning AI in the company?”

If the answer lives solely in the CTO’s office, “I would have alarm bells ringing.” His preferred model is a business president who owns AI as part of delivering outcomes, supported by technology, with the board using the usual tools—strategy sessions, risk reviews, compensation levers—to keep them honest.

AI governance should really be an extension of longstanding fiduciary duties, says Bo-Linn: “AI initiatives [are] one of the fiduciary oversight responsibilities shared by all boards.” At a minimum, she wants directors asking where AI is already deployed, where it’s going next and how management is addressing data privacy, cybersecurity, bias, IP and job displacement risks.

Bulls, always the ex-professor, goes straight to policy and culture. “The red flag for me would be if there was not some written governance,” he says. “It needs to be a written policy of how AI is used.” That includes very practical rules—no feeding sensitive data into personal accounts, clear training on what tools are sanctioned and regular checks on whether the policy has actually made it past the head office and into the line. His view: You can’t have “responsible use of AI” without responsible leadership, and leadership shows up first in the rules you enforce.

AI initiatives [are] one of the fiduciary oversight responsibilities shared by all boards.

—Cheemin Bo-Linn, KORE Wireless

3.

USE AI TO COMPRESS INFORMATION, NOT JUDGMENT.

While you don’t want to hand the keys to the machine, you can and should be letting it make short work of tasks that previously took hours or days. Bulls recently had to condense a thick stack of CEO evaluation feedback, so he fed his notes into a personal, secure GPT, which got the memo about 80 percent of the way there. The other 20 percent was “looking at every word and comparing that to my judgment,” he says. “What would’ve probably taken me two days to do, I was able to do in half a day.”

Rotar is watching board materials evolve in real time. Instead of 100-page decks, leading boards are leaning on AI to surface anomalies, themes and outliers. “Being able to digest information and synthesize information quicker, get to the bottom of what really matters through data, that is quite powerful,” he says, adding that boards are also beginning to use AI as a simulator—“a sparring partner for simulation scenarios,” much like cyber tabletop exercises, to test how activist investors or hostile bidders might behave and what might break under stress.

Beliveau-Dunn points out that this shift is already built into the vendor ecosystem. “All traditional board portals like Diligent now have AI versions that help summarize data, spot trends and help with workflows,” she says.

Compensation and audit advisors are plugging in as well, she adds. AI can help streamline benchmarking data and get to a quicker analysis of how companies compare against each other. “It can also make summary recommendations based upon benchmarks and also governance standards, but every board should have a compensation advisor involved to aggregate this third-party data using AI tools.”

Thought Leadership By ghSMART

The board’s involvement in AI is no longer a distant strategic consideration. As generative AI accelerates the pace of change across industries, future-ready boards have already begun adjusting how they govern, recognizing that traditional oversight models are too slow and reactive for an AI-driven world. Their challenge is no longer whether to engage but how to actively help organizations move faster without losing control, coherence or trust.

Most boards, however, were not designed for this environment. Many are hearing the same advice: add an AI expert, approve a roadmap, get educated on AI, stand up an AI steering committee. But these moves often miss the mark. The boards that fall behind are not the least informed; they are the ones still governing AI as a technology problem rather than a leadership one.

So what does effective board leadership on AI actually look like in practice?

Here are five moves future-ready boards are making to accelerate AI:

1. They focus on leadership, not technology. Boards do not need deep AI expertise, but they do need access to the right perspectives, both in management and in the boardroom. That means ensuring the company has the right mix of “AI Architects,” who build systems, and “AI Shapers,” who embed AI into how the business actually runs. Increasingly, boards will need to assess leaders against the “SHAPE” leadership capabilities ghSMART has identified—strategic agility, human centricity, applied curiosity, performance drive and ethical stewardship—which help predict whether leaders are equipped to lead AI-driven change (and where gaps may limit impact).

2. They shape the conditions for AI leadership to thrive. AI stalls when organizations are optimized for stability rather than learning and agility.

3. They evolve governance cadences to match the pace of AI. Annual planning cycles are poorly suited to rapidly transformative environments. Future-ready boards are adopting faster, more dynamic oversight rhythms that allow leadership teams to pivot, scale or stop AI initiatives before value and trust are compromised.

4.

DON’T LET CONVENIENCE BLOW UP PRIVILEGE AND CONFIDENTIALITY.

If the technologists are leaning in, the lawyers are quietly tapping the brakes—and for good reason. Bob Profusek, a veteran deal lawyer with Jones Day and lead independent director at both CTS and Kodiak Sciences, has started spending real time with boards on “the downsides of AI in the board context.”

The scenario he worries about is one every busy director will recognize: you have 10 minutes, an 80-page memo and a tempting text box. If you decide to plug it into a GPT for a summary, “you may have just violated a confidentiality agreement,” says Profusek. “You’ve got to start thinking about this stuff.” The risks are numerous. Put deal terms or NDA-protected information in there, for example, and “all of a sudden you have an insider trading issue.”

He’s even more emphatic about recording board meetings so AI tools can draft minutes. “The last thing on earth you want to do is record a board meeting.” Beyond the chilling effect on how people speak, you’ve just created a discoverable transcript of every aside, half-formed thought and unfortunate phrase. “Minutes aren’t supposed to be transcripts or summaries. There’s an art to preparing minutes,” he says. “You’re describing what happened, not what was said.” His answer to vendors promising to “create the minutes in minutes” is simply: “Not on your life.”

Ammanath adds a risk directors are starting to see up close: “shadow AI,” where employees quietly use unsanctioned tools without any regard for the company’s risk posture. She urges boards to pay attention to shadow AI itself, to data security and compliance as new regulations take effect and to workforce impact and retraining needs. In her view, boards that push management for clear oversight frameworks, defined accountability and a plan for AI governance will be “better positioned to capture AI’s benefits while mitigating its risks.”

Rotar sees a way out of some of this, but not all. He points to the rise of “sovereign AI”—custom, locked-down models using company data—that can act as “corporate cortex” without spraying sensitive information across the public internet. Still, even with better plumbing, the board has to insist on clear boundaries: which tools are used for what, how privilege is preserved and where human lawyers, auditors and advisors remain the final word.

The last thing on earth you want to do is record a board meeting.

—Bob Profusek, CTS and Kodiak Sciences

Thought Leadership By Nelson Mullins

The laws of a company’s state of incorporation have important legal implications for public companies and their directors, officers and stockholders. Why? Because key legal issues affecting corporate governance are generally governed by the incorporating state’s laws. While most companies prefer Delaware, assessing the best fit for a company requires a case-by-case analysis given the many variations that exist between states.

To illustrate, this article provides representative examples of how states can differ, focusing on four areas that frequently impact stockholder litigation.

Books and Records Requests

Stockholders often use their right to make “books and records” requests to investigate potential wrongdoing. Companies may have several bases to reject or narrow requests based on the circumstances, with Delaware courts in particular having issued company-friendly decisions in this regard.

In addition, states can differ on which stockholders are even entitled to submit a request for books and records. While many states permit any stockholder of record to do so, others like Nevada, North Carolina and Texas have further ownership requirements (i.e., percentage of ownership or time holding the stock) to request inspection.

SHOULD YOU APPOINT AN AI EXPERT TO THE BOARD?

Boards are debating whether today’s AI moment justifies devoting one of a limited number of seats to a specialist. The answer is complicated: in some businesses, yes, but the wrong kind of “expert” is as risky as having none at all. The decision turns less on fashion than on how central AI is to the company’s business model and what mix of experience and judgment a candidate brings to the table.

CTS and Kodiak Sciences’ Bob Profusek draws the first boundary line: The job is still governance, not engineering. “We’re not charged with crunching the data. That’s what management does,” he says. And if the board is deciding strategy itself, “you’ve got a problem with the management.”

Some argue for operators who sit at the intersection of AI and business or people. Atos’ Florin Rotar says you do “need expertise” but advises against getting a one-trick pony because “when you’re a hammer, everything starts to look like a nail.” The secret sauce, he says, is a profile with real experience “at the intersection of AI and business or at the intersection of AI and people” because “80 percent of the challenge is more related to people than to technology itself.”

Jeanne Beliveau-Dunn of Columbus McKinnon, Edison International and Crewdle goes further: “You do not need or want a pure AI engineering or research person on your board, that’s too limiting for the broad range of discussions that need to be driven.” Her model director has run businesses in tech and can apply what they know to other industries. “Look for a former GM or operator in a tech company that has cyber and AI knowledge and has worked extensively with it. Refrain from hiring CSOs, CIOs or AI researchers or engineers. They would not have had the scope of knowledge to add value on a board on all topics that the board needs to manage.”

WEIGH WHETHER A DEDICATED AI VOICE MAKES SENSE

Sector and strategy matter. Profusek points out that in advertising, accounting, professional services and data-heavy businesses, AI is now “a two-edged sword” that can be “incredibly positive or incredibly destructive.” In those cases, having a director with deep AI experience can be a logical addition, much like bringing on someone with heavy regulatory or international experience when a company is expanding into a new region. Rotar notes that in some technology-centric companies, AI is already forcing “fundamental reinvention of the company,” changing revenue models, workforce design and sources of differentiation. Where AI is that strategically defining, devoting a seat to someone who has actually led AI-enabled transformations—rather than just talked about them—can help boards separate hype from reality and push management to allocate capital accordingly.

Look for a former GM or operator in a tech company who has cyber and AI knowledge and has worked extensively with it.

—Jeanne Beliveau-Dunn, Columbus McKinnon, Edison International and Crewdle

KORE’s Cheemin Bo-Linn, who has chaired technology and audit committees, adds that boards have been plugging digital gaps for years by adding directors with broader “digital technology and infrastructure” experience, then loading cybersecurity and now AI into that remit. In her view, at least one director should have enough hands-on digital and AI scar tissue to help frame the right questions and interpret tradeoffs, especially as regulators sharpen expectations and liability around AI misuse.

HOW TAPPING A NARROW SPECIALIST CAN BACKFIRE

The constraint, of course, is that board seats are scarce, and any specialist crowds out other needed perspectives. Most boards, Profusek notes, still need broad judgment more than they need another technician. An AI scientist who cannot engage on strategy, capital allocation, CEO succession or culture may satisfy a checkbox while weakening the board’s overall effectiveness. There is also a social dynamic risk. Stanford governance professor David Larcker warns that when boards bring in a narrow expert—on AI or anything else—there is a tendency for others to over-defer. Directors “have to be more generalists,” he says; if one person is seen as owning the topic, “how do you onboard them” into the full range of decisions, and how do you avoid everyone else sliding out of their duty of care simply because they don’t know as much?

TO GET TO THE HEART OF THE DECISION, CONSIDER ASKING THESE QUESTIONS:
  • How central is AI to the business model? In AI-intensive sectors, a director with real AI-deployment experience can be worth a seat; in more asset-heavy or commodity businesses, boards may be better off buying expertise as needed.
  • Do we already have digital depth on the board? If technology, data and cyber are already well-covered, incremental value from a narrow AI hire may be modest.
  • Can the candidate contribute beyond AI? Boards should favor directors who have run P&Ls or major functions and can engage credibly on strategy, risk, people and capital.
  • Are we raising AI literacy for everyone else? If the rest of the board remains AI-illiterate, adding a specialist may create more deference than discipline.
  • Could an advisory role suffice? For many companies, Larcker suggests, specialized AI insight can come from management, internal experts or outside advisors, while refreshment focuses on generalists with AI as a “secondary attribute of interest.”

 

The emerging consensus: Appointing an “AI director” can be smart governance where AI is strategically defining and the candidate is an operator who understands both technology and business. But boards that rush to fill a seat with a narrow technologist—or use that appointment as an excuse not to build boardwide fluency—risk solving the wrong problem and creating new ones, just as the stakes are rising.

5.

DEMAND AN AI-NATIVE STRATEGY, NOT AI THEATER.

If there’s one move from management you can comfortably have no patience for, it’s the “we stood up a chatbot; we’re done” school of AI strategy. Bo-Linn has watched many boards wobble between hype and denial. “The potential to transform AI technology disruption into opportunities is often underestimated,” she says. The difference maker is whether the board is willing to push management to reimagine the whole business, not just bolt AI on. At Peritus Partners, where Bo-Linn served as CEO for more than a decade, she “had to reimagine how the organization, the work plan, product development, manufacturing, supply chain, all the eight or 10 different functions, how they would look in an AI world.”

Rotar reaches back to the first internet bubble to make the point stick. In the late ’90s, every company wanted to be a dot-com, and many settled for a brochureware website. The real value went to those that understood the underlying economics, that “the cost of digital distribution was going towards zero and there was an entirely new business model to be had around how you sell things or how people consume media,” he says. Today, “if you’re just having a chatbot, that to me right now is the equivalent of a flat dot-com.” The boards that will win are the ones that grasp that “the cost of cognition is now going towards zero” and those that insist management rethink revenue, differentiation and workforce design accordingly.

Larcker sees AI changing the tone of strategy sessions as boards gain the ability to independently test management’s assumptions against outside data and competitors’ public positions. Directors can now ask: “What assumptions do [our competitors] seem to be making?” or “does this actually make sense?” and have some evidence at hand, he says. That can make for more substantive debates—though he’s careful to flag the boundary: Once the board starts using that information to effectively coauthor strategy, “the board can’t manage the company” without creating legal and governance problems.

Rotar agrees that for some directors, having so much data at their fingertips will make the urge to be “too operational” difficult to resist. “It still needs to be noses in, hands out.” But it also puts them in a stronger position to spot weaknesses in management presentations. “I’ve seen scenarios where the management team is very happy to share all the tables and details in the world knowing that no sane human would be able to digest it all,” he says. “But now they can.”

With the added data, analysis and benchmarking, board members can do the job better. “Because remember, our job is not to answer questions,” says Bulls. “Our job is to see around corners and ask insightful questions.”

I’ve seen scenarios where the management team is very happy to share all the tables and details in the world knowing that no human would be able to digest it.

—Florin Rotar, Atos

Thought Leadership By Meridian Compensation Partners

Executive compensation has long been used for rewarding performance and retaining talent. When boards only evaluate pay through a reward-and-retention lens, however, they invite scrutiny from investors, proxy advisors and regulators, particularly around alignment with strategy and long-term value creation.

As attention to pay vs. performance intensifies, executive compensation programs and CD&A disclosures send a message far beyond the boardroom, signaling governance quality, strategic clarity and credibility. Effective committees don’t just approve pay programs, they challenge them. They listen to dissent and apply disciplined oversight, judgment and consistency, continually testing alignment with strategy. To guide this process, here are five questions boards should ask their compensation committees:

1. What are we actually rewarding?

Executive incentive plans do more than reward results—they shape behavior. Incentive design influences how leaders prioritize decisions, allocate capital and manage trade-offs. When incentives are misaligned with strategy, they can encourage behaviors that undermine long-term objectives, even if short-term results appear acceptable.

6.

MOVE NOW—OR ACCEPT THAT “INACTION IS A DECISION.”

No one pretends to know exactly where AI in the boardroom will land five years from now. They are, however, quite clear on one thing: standing still is choosing a side. “Inaction is a decision,” Bo-Linn says of boards frozen by risk or regulatory uncertainty. “Those who are on that fence, who are consumed with the risk and have a conservative view like they did with cybersecurity, they’re already behind.”

Ammanath draws a sharp line between boards that “embed AI into enterprise strategy, proactively shape governance and risk frameworks, and cultivate an organization-wide culture of AI fluency and innovation” and those that “only react to AI trends or treat AI as a technology-only issue.” The former group, she suggests, will look prescient in a few years; the latter will look like they missed the turn.

Rotar is blunt about where this goes. Asking how effective a future board will be without AI, he says, is like asking “how effective would a board be today if it wouldn’t use phones and computers?” AI, in his view, will soon be “embedded in the fabric of work” and “an indispensable tool and asset.” The difference will be between boards that pay “lip service” and boards that “truly embrace it”—directors who use the tools themselves, understand their ramifications and are willing to push management to do the hard, sometimes messy work of real transformation.

The most critical lesson is not to underestimate the seismic change AI brings to business. “I’ve been in the business world for almost a half century, and I can’t think of a development that’s felt so revolutionary,” says Profusek. “But we’re in the first inning of at least a nine-inning game. It’s way too early to come to judgments about it. Some things are pretty clear—but a lot of it is still to come.”

AI Leadership Forum

Corporate Board Member’s AI Leadership Forum will bring together large enterprise board members, CEOs and leading experts for a highly interactive, one-day program designed to sharpen strategic judgment and accelerate enterprise readiness.

Participants will pressure-test decisions, debate tradeoffs and leave with practical frameworks, including board dashboards, strategic bet maps and workforce readiness plans.

Thought Leadership By Alliance Advisors

Shareholder Engagement

Projected Certainty

How vote projections guide board decision-making on proxy proposals.

In an era when investors scrutinize every line of a proxy statement and every dollar of dilution, public companies are increasingly turning to vote projections to navigate the choppy waters of proxy season. These probabilistic forecasts, built on data, governance insight and disciplined scenario analysis, are remarkably accurate, yet they are not promises of outcomes but powerful decision-support tools. They help boards, counsel and corporate secretaries chart a course that aligns management’s strategic objectives with what shareholders are likely to approve.

Purpose: to provide a shareholder vote sensitivity analysis of potential outcomes of ballot items, including shareholder proposals, approval of equity compensation plans and increases in capital, among other proposals.

What a vote projection does: A vote projection is a structured analysis that combines a company’s specific shareholder base, historical voting patterns, proxy advisory firm guidelines and influence, and proposal specifics. Typically, a company will want to run a few projection scenarios, as many proposals will be reviewed on a case-by-case basis by the proxy advisory firms and shareholders.