Courage In The Boardroom: Microsoft Lead Director Sandi Peterson

Sandra E. Peterson
The attack on Ukraine started long before Russian forces set foot on Ukrainian soil, with nation-state actors reaching through cyberspace to stalk its infrastructure. Fortunately, the country had a powerful ally in Microsoft, whose efforts to defend the country led our selection committee to honor them with CBM’s 2023 Courage in the Boardroom award. Lead director Sandi Peterson shares a boardroom perspective on the tech company’s role in fending off the threat of cyber weapon deployment and on other challenging issues faced by today’s boards.

It’s no secret that the same globally interconnected, transformative technology that businesses deploy to drive growth and gain efficiencies creates vulnerabilities to cyberattack. That it’s no different for nation-states became abundantly clear when Russia’s first foray in its conflict with Ukraine began not with tanks or missiles but with a cyber weapon assault aimed at the country’s data centers and infrastructure. That came as little surprise to Microsoft, which has a lengthy track record of identifying and tracking cyber activity and developing cyber detection and defense capabilities. Sandi Peterson, lead independent director, says Microsoft cyber teams reported nefarious cyber activity in Ukraine well before the war began.

“Basically, we started seeing a fair amount of increased activity in Ukraine from third parties, many of them nation-state actors, trying to get into the infrastructure,” she explains. “As a board, we were very supportive of the company working with the Ukrainian government quietly to help them improve the security of their infrastructure.”

For those efforts, and its ongoing work to defend global society against cyberattacks, the board of Microsoft is being honored by its peers with the 2023 Courage in the Boardroom award. “We have spent a fair amount of time as a board on this topic over the last handful of years in terms of making sure we understand what’s happening, and we get updates from the team in a couple of different contexts,” says Peterson. “One is, how are our own capabilities and our infrastructure being hardened to be more secure, but also, what are we seeing with our customers? And what are we doing to enable our customers to manage these things? How and when do we find things, and how do we communicate them in an appropriate way to others, whether it’s with the customer or other authorities where that makes sense?”

CBM recently spoke with Peterson about cybersecurity and other risks and opportunities today’s boards grapple with regularly. Excerpts of that conversation, edited for clarity and length, follow.

What conversations took place in the boardroom when the issue of cyberattacks against Ukraine arose? 

We have this whole system in place that’s sort of been building over a handful of years. It happens in the audit committee because it is a risk issue. It happens in our environmental, social and public policy committee because it has to do with how we see nation-states acting and what that means for policy issues as it relates to Microsoft. And when we are asked, because many of these governments are also our customers, how do we engage with them and do the right thing and provide them with what they need, but also, where it is appropriate, how do we share that with others, whether it’s other governmental entities, NATO and so on? So there’s some longer history here that mattered a lot.

Also, we as a company think it’s our responsibility to draw the world’s attention to some of the risks that are posed by these nation-state cyberattacks and disinformation campaigns. Where it makes sense, we’ve been very public about those things in the right context and in concert with other governments and others who are being affected by this, because a big part of what we needed to do during all of this was raise the profile of [cyber risk].

That’s an important backdrop because when all of this started occurring with Ukraine, we already had many mechanisms and ways in which we were engaging with the company as they were learning things. So we had conversations about what was going on, what they had learned and how our security teams had been working closely with Ukrainian government officials as well as a lot of our cybersecurity staff. And we also work with a fair number of other organizations to make sure that we’re sharing information, where it’s appropriate, to identify these things and remediate them as quickly as we possibly can.

And obviously, our work, unfortunately, has not stopped, because the targets that these individuals are going after go way beyond just governmental entities or the Ukrainian military and into civilian areas.

We hear a lot about boards and companies trying to find the line between what they need to do for the company and how far they should go to do good in the world. Do you talk about that in the boardroom? 

I would say a couple of things. One is that in many of these cases, these are our customers also. And when we see these things, a lot of these tools and techniques bleed into other parts of our customer base. So we view it as we need to stay ahead of this, and we need to learn and understand what the next set of attacks will be, quite honestly, and where they’ll come from, because it keeps changing.

It’s a little bit of the classic Whac-A-Mole game. For us, we believe that part of our responsibility is to understand what’s going on, find these things, and then make sure that we can secure all of our customers’ estates so that they’re not being negatively impacted. Some of these nation-state tools that are created end up in the hands of cybercriminals who then try to cause all sorts of disruption with many of our enterprise customers.

The other thing I would say is we are also very conscious of what’s appropriate for us to share with others, such as with other governmental entities that are also working on these things, so we can share best practices. And when we see something, do we actually share that information with NATO, the U.S. government or the Ukrainian government? We view that as our responsibility.

How did the board approach oversight of cybersecurity in this context?

The questions we always ask when we have these updates and reviews start with, what are you—meaning the team—most concerned about? And do they have the resources, do we have the capability to do the things that we need to do? That’s one question we always ask. Then the second one is, what is the implication of what you’re doing and learning for the broader enterprise and the support of the rest of our customer base? Are there things we should be doing, learnings from this that we can take and integrate into our broader offerings to the rest of our customers?

Those are the kinds of things that we talk about pretty extensively. With these reviews, obviously some information is highly confidential and very, very sensitive. Some of it takes your breath away, and it makes you realize the importance of the work being done.

ASSESSING AI

Microsoft’s innovation in the AI and cloud business arenas was also referenced by judges, particularly the investment in OpenAI and Azure. What was the strategic thinking around those bold moves? How have they played out so far?

Microsoft has obviously been very involved in creating these large language models and in AI generally. Interestingly, one of the things that was a bit of a breakthrough in a lot of this is the models that were built to do translation, which Microsoft has been very involved in for a number of years. And it’s something we talk about as a company.

One of the things that we started doing that’s relevant as a board has to do with the two- to three-day intensive deep dives on the company, the strategy and the technology that we used to do until probably four or five years ago. It was almost overwhelming because it was so much in a short period of time. So we decided, let’s break this up and do it in pieces. Let’s get the best experts to spend time—whether it’s an hour or three hours—with the board on a particular topic. So the board gets much more informed, and the company and the experts talk about where these things are going and what matters most. We’ve done these cycles on AI-related things fairly frequently for the last three or four years.

For us, it was a great journey because people have been talking about AI, as you well know, since Turing, and it’s been around since the ’50s. And everybody says it’s tomorrow, tomorrow, tomorrow. What was great for us was that, due to this ongoing work that we were doing, it started becoming very clear that, because of the massive compute that we’ve been able to build with Azure and the work that we had done in the beginning of the conversations with OpenAI, there really was an opportunity for a massive technology breakthrough.

That, of course, was very exciting for all of us because it was a way for Microsoft to be on the front foot of what’s most important in the next technology wave. So when OpenAI itself became a conversation, we were all pretty up to speed on our general thinking of where we were going. This just felt like a very logical thing for us to do, to build this partnership. They needed our compute capabilities, and we needed some of their really deep expertise on these large language models. So it was a great way for us to partner and then bring this to the market in a pretty robust way.

Some of those conversations started happening last fall in a deeper way, and I think part of the discussion in the boardroom was a lot of debate and conversation around, “Well, if this really is real, then let’s not be timid about it. Let’s think about how we can use this technology to support not just our search capabilities, but how do we embed it in many of our other offerings, and can we really make this a pivotal moment to rally around this and really deploy it in a much more diffuse way across many more parts of the enterprise?”

The board got very behind that and said that that’s the right thing for us to do, and let’s go make it happen. So we’re all holding hands and jumping together on this one because it makes a ton of sense, and it’s a huge positive for the world if we can get this done the right way.

What’s your take on the controversial aspect of AI and what businesses should be thinking about in terms of risks?

For years we’ve been very public about what we refer to as being responsible and ethically appropriate when any technology is used, but this one in particular. Part of the conversation, and the thing I’m very encouraged about and I think we are generally encouraged about, is that there is a recognition that technology will not always necessarily be used for good. There are “bad actors” in the world that might use AI for reasons that we don’t think are the right reasons. Let’s be super realistic that that’s a possibility, and let’s make sure that there’s a broader conversation and engagement with others to make sure that we can find the right way to put guardrails on this.

Let’s not be naive that some may use [AI] for the wrong reasons, but let’s be very responsible about how we work through all of this and how we make sure it works in the most effective way possible. And when things don’t necessarily work terribly well, let’s correct it. That’s part of how Microsoft has been dealing with this in a very thoughtful way. We’ve been a huge proponent of engagement, as you know, with other governments, other tech companies, to talk about how to deploy these tools in a way that makes the most sense and works the best.

These are, at the end of the day, all productivity tools. Regarding the concerns over all these jobs that may get lost, I’ll give you two interesting things about that. One is, we have something called GitHub, which supports developers. And one of the first deployments of this technology was GitHub Copilot, which enables developers to basically write in normal language things they’re trying to do and get chunks of preset code that’s already been validated.

Developers will tell you that some of their job is terrible drudgery to write long, long [strings] of code. So it’s a way to make their jobs more interesting. Even though maybe there won’t be as many coders in the world in the future, they’ll do higher-level coding and more interesting coding. So that’s an example of improving the quality of the developer’s life, making software development more efficient, which is probably at the end of the day a good thing.

The other one is, when I started out in business, Excel or Lotus 1-2-3 didn’t exist. PowerPoint didn’t exist. You did things with HP 12Cs, and you wrote charts by hand and gave them to somebody who had some very kludgy software to do it. Well, that has changed a lot through Excel and PowerPoint. And we’ll use some of these AI tools to actually do another turn on the productivity improvement so people can actually do more thinking and less just mechanical work.

I know some people are concerned about some jobs going away, but other jobs will be created, and hopefully some of those jobs will be more interesting and fulfilling than the ones that are going away. We do talk about all of this. It’s part of the conversation in the boardroom about how we use AI in a way that’s good for society, that is consistent with our mission of empowering others in the world to be as productive and fulfilled as possible. We view this as an opportunity to do that, but with eyes wide open about all the things that we should be thinking about, and to ensure that it really is being used for the right reasons and in the right context. And if something bad happens, we’re the first ones to say we’re going to shut this down and rethink what’s the right way to do that. And we learn from it—it’s all about our learning culture.

Concern about job losses comes up with any automation, but AI also poses other risks, such as bad actors using it as a tool.

I guess the point is you can’t stop other people from using this technology for not necessarily the right reasons. But if you understand it very well, and quite honestly, all the work that we’ve done on monitoring and understanding cyber things, it gives us tools and techniques to help hopefully identify when it’s not being used appropriately and when people are using it for the wrong reasons. And we can “take it down” the way that you do when you’re dealing with cyber issues. So that is the hope, but being super realistic that that may happen and we have to be hypervigilant about it is very important.

Succession planning is one of the principal functions of a board. Can you share a little about what practices you find most effective at Microsoft?

Some of the things that we do are not terribly unique, but there are a few things that are really unique and a great way for us as a board to get much more deeply engaged and involved in succession planning. To put it in context, if you look at the size of many [Microsoft] businesses, whether it’s LinkedIn, gaming or Azure, those are massive, complicated Fortune 100 companies in and of themselves. We have very big leaders leading those areas, and they’re very complicated businesses. So one of the things we spend a fair amount of time on is the leaders two and three levels below [CEO] Satya [Nadella], because hopefully Satya will be with us for a long time, and who knows whether any of those individuals right underneath him will want to or be able to take on the CEO role when he decides to step down?

So we spend a lot of time talking about those individuals’ strengths and weaknesses. If they’re ever going to get to the “corner office,” what are the next two or three experiences that they need to have and demonstrate in terms of capabilities and leadership skills? There are a lot of conversations about moving people around at that level and finding the right next roles for them. 

The second part of it is we as a board, through a number of different mechanisms, get to know all these individuals. We have informal lunches, maybe two board members and an individual on one of these lists, just talking about their background, what are they working on, what are they most interested in, those kinds of things.

Another thing we do as a board that I think is fabulous practice is the engagement and transparency [practice] that we have as a board and a company. For example, if Satya or the company is working on some particular topic, he may ask a board member or two board members who may know something about that, “Could you go spend a bunch of time with this team and help them think through whatever it is?”

It’s a way for board members to actually roll up their sleeves and get to know people, almost in a work environment. But it also is a way for the company to get really good advantage out of the diversity of the experience base of the board. I’ve been asked a couple of times to get deeply involved as they were looking at a couple of acquisitions, and I could then, in the boardroom when the team was presenting the acquisition, be another voice in a slightly different way on that topic.

So that’s another way that we get to know the people in a different, unstilted way where it’s not just a bunch of PowerPoints and, “Let me tell you this person’s strengths and weaknesses.” There’s a belief that being exposed to people in lots of different contexts and environments is a really good thing, and there’s no concern about board members roaming the halls at all. 

Another thing that Satya does that is terrific is he gets his senior leaders together, somewhere between 200 and 300 people, once a year. He invites board members to come to that and sit in on anything, hang out at the bar with people late at night and ask them what’s going on and how they’re doing. It’s a way for board members to get a different sense of the individuals in the company, a pulse on the culture and how people are feeling. It’s very much this notion that we’re allowed to sort of roam the halls and whatever we learn is a good thing.

So there are standard ways people handle succession that we do too, but this multifaceted approach we do as a board with Satya and his team gives it a much richer context because we need to think about succession for him but also to some degree for these individuals who are running massive businesses.

That requires a lot of trust on both sides because Satya is giving you access to areas that at other companies—well, sometimes there’s a wall.

Yes, there is. That wall doesn’t exist here. And to your point, it’s very much why the board and the management team have a very constructive, collaborative relationship. Don’t get me wrong, there have been times when we’ve been like, “No, this is not acceptable. You know, we need X, Y and Z.” So it’s not that it’s become too close. We understand the distinction here and where we need to play an oversight and governance role, but it also enables us to get into conversations.

There are times in the years that I’ve been on this board where it’s like, “We’re not sure what the right answer is. This is the thing we’re grappling with. I would love to get your input.” And that’s a great way to use a board, right? To demonstrate your need to learn from others, to say, “I need to figure out how to do this, and I need your help.”

PINPOINTING PRIORITIES

We’ve been experiencing a movement toward greater focus on purpose in the boardroom. What shifts in conversation or priorities have you seen by boards or among your stakeholder communities?

I always joke that you could put whatever you want on an Excel spreadsheet or in a PowerPoint, but at the end of the day, it’s the people who work in that company that will make it happen. There are lots of reasons why Microsoft has come back in a very strong way and is doing very well under Satya’s leadership. But at the core of all of it is this belief that we need to have a very healthy culture, a collaborative culture, people wanting to show up every day who feel like they’re doing something that’s meaningful and will have a positive impact on the world.

So it has been really the foundation of the transformation that has happened at Microsoft. And we don’t make a distinction—it’s not an “either-or,” it’s an “and.” For us, this idea of culture is very important. People need to feel like they’re being listened to; we have a lot of mechanisms in place to sort of make sure that we’re continuing to show up the right way, that leaders are doing that right. There’s a ton of training that’s happened over the last couple of years, particularly during Covid when it was very difficult for leaders to figure out how to be an empathetic leader the right way.

People come to Microsoft because they think that we’re doing the right things the right way, and that’s always been part of our hallmark. We even talk about this in the comp committee. We look at the data, and people come to work at Microsoft and stay because they feel like they’re doing it for a higher purpose, helping the world be a better place.

What other issues or challenges are you seeing boards grapple with these days? What should boards be thinking about and planning for?

I don’t have a crystal ball. All I do know is that people always talk about unprecedented times. But whatever time period you’re living in, you think it’s unprecedented. The point of all of that is that the world will continue to evolve pretty rapidly, both in terms of the economics of countries and what’s going on with governments, what’s happening with technology, healthcare, all sorts of things. So the one thing that I can guarantee is that it will look very different four years from now than it does today.

The most important thing that a board and a company needs to do is to be very open-minded about the need to always be thinking, “What do I need to do differently as the world continues to change?” Don’t get paralyzed by that change, because that’s the one thing that is a constant. It’s changing pretty dramatically, and it will continue to change pretty dramatically. AI and other technologies hopefully will accelerate some of that change in a very positive way on a go-forward basis.

So, we talk about having a learning culture and helping the board and the company as they’re learning things that aren’t relevant to the existing company, whether it’s Microsoft or somebody else, by bringing them into the conversation. Say, “Hey, I heard X, Y and Z. I’m not sure it’s relevant here, but maybe it is. What are the implications of that for how we do things?” And just because whatever we did worked last year doesn’t mean it’s the right thing to be doing this year, because the world’s going to continue to evolve.

