Risk In The AI Era

From cyberattackers wielding artificial intelligence-enabled arsenals to hallucinations that produce flawed outputs, the risks this transformative technology is bringing to the business world are as vast as its potential. Experts shared insights on preparing for them at a recent director forum. Here are some takeaways.

To get a sense of just how dramatically artificial intelligence is reshaping the cybersecurity risk landscape, one has only to look at the source of the fraud most of us know best—emails that purport to be from a family member, friend or coworker but are really a ploy to defraud a company or access its systems. Such scams have grown steadily more sophisticated over the years, and the use of AI to create more convincing communications and impostors is bringing them to an entirely new level, Timothy Howard, a partner at Freshfields, told directors gathered for a recent Corporate Board Member forum on artificial intelligence held in partnership with Freshfields.

Raising Cyber Risk

“Imagine how effective a fraudulent email can be targeting an individual when AI’s ability to quickly digest someone’s social media profile and background is used to craft that perfect message that lures them into clicking on a link or downloading an attachment,” said Howard. “Once one employee is compromised, you have access to their entire inbox. AI tools can then be used to ingest that inbox and analyze the history of communications and style of messaging to conduct an effective internal spearphishing campaign.”

And that’s just the beginning. The integration of AI into cyberattacks has significantly strengthened the capabilities of bad actors across the cybersecurity landscape. “AI is creating additional cyber risk as a force multiplier, enhancing the ability of threat actors to get into systems, maintain access, exploit information networks and circumvent various defensive measures,” said Howard.

The use of AI in malware is particularly alarming. These AI-enhanced threats can adapt to evade detection and maintain persistence within compromised systems. “We’ve also seen reporting regarding malware that can use AI capabilities to understand when it is being detected… to change slightly so that they can maintain persistence and avoid being removed from the system,” said Howard.

AI has also lowered the bar for cyberattackers, noted Brock Dahl, a partner at Freshfields and former deputy general counsel of operations at the National Security Agency, who pointed to the relative ease with which an attack can now be carried out. “It used to be that developing sophisticated malware that would be used to infect a system or otherwise initiate a ransomware incident took special skill sets, the ability to write that type of code,” he said. “Now, some of those basic skills can be done by machines, and you can program the machine or really just ask the machine to perform some of those functions. So you have a broader set of people getting involved in these types of activities. And when you look at the statistics on malware events, ransomware events and these types of threats, many of which are becoming increasingly enabled by these tools, the numbers are pretty staggering.”

Innovating with AI

From a governance standpoint, the emergence of AI capabilities demands a proactive approach to assessing their impact on cybersecurity vulnerabilities, said Dahl, who urged directors to establish a robust governance framework that addresses both the challenges and opportunities of AI. “It’s not just that you have threat actors using these capabilities,” he said. “It’s that your own enterprise is using tools and products and services or creating products and services in new and different ways. So, how do you think about managing that and overseeing it in a space where a lot of those risks are hard to predict or otherwise unknowable just because of the nature of these technologies?”

For board members and C-Suite leaders, understanding AI’s transformative role in cybersecurity is fast becoming a critical requirement for safeguarding digital assets and ensuring corporate resilience. Directors whose companies are on the cutting edge of developing AI, or are using AI tools in new and different ways, need to ensure that security and data privacy measures get appropriate consideration in the race to be a leader in AI adoption. Those layering their capabilities on models provided by third parties need to understand how the underlying model works and what the partnership means for control of the data they provide or share.

Dahl shared three aspects of governance and risk management that boards can focus on as their companies explore AI:

Visibility—understanding the nature and content of data being collected and used. Given the sheer volume of data involved and related security exposures, boards need to ascertain the quality of data being used, the processes that it goes through and how that information will be used. This includes considerations around privacy laws and the need for a deep understanding of data sets and usages.

Testing and replication—understanding your internal processes or what your external providers are doing with data. “Replicating and testing these processes becomes critical to ensuring reliability and interrupting potential risk cycles triggered by problematic data or feedback loops,” explained Dahl.

Auditing results—checking that the output matches expected results or appears reasonable. “A lot of the challenge here is the inscrutability of what goes on inside the box of this AI functionality because of the sheer mass of what’s going on,” said Dahl. “There are already examples of problematic results that relate to security and other features about data producing [flawed] results…,” he said. “But you can interrupt that type of risk by having visibility into your data and its use, as well as independent ways of measuring what outcomes you’re expecting for your particular product or service and being able to verify those.”
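To make that third point concrete, the sketch below shows one very simple form an output audit might take, assuming a hypothetical AI service that returns numeric risk scores. The field name, thresholds and sample data are illustrative placeholders, not any particular product’s interface.

```python
# Minimal sketch of an output audit: flag AI results that fall outside
# independently defined expectations. All names and thresholds here are
# hypothetical placeholders used purely for illustration.

from dataclasses import dataclass


@dataclass
class AuditRule:
    field: str   # name of the output field to check
    low: float   # lowest value the business considers reasonable
    high: float  # highest value the business considers reasonable


def audit_outputs(outputs: list[dict], rules: list[AuditRule]) -> list[str]:
    """Return human-readable findings for missing or out-of-range results."""
    findings = []
    for i, record in enumerate(outputs):
        for rule in rules:
            value = record.get(rule.field)
            if value is None or not (rule.low <= value <= rule.high):
                findings.append(
                    f"record {i}: '{rule.field}'={value!r} outside "
                    f"expected range [{rule.low}, {rule.high}]"
                )
    return findings


if __name__ == "__main__":
    # Hypothetical outputs from an AI service that scores risk from 0 to 1.
    model_outputs = [{"risk_score": 0.42}, {"risk_score": 1.7}, {}]
    rules = [AuditRule(field="risk_score", low=0.0, high=1.0)]
    for finding in audit_outputs(model_outputs, rules):
        print(finding)  # anything flagged here gets investigated, not absorbed
```

The specific check matters far less than the principle Dahl describes: an independent, repeatable way to compare what the system produces against what the business expects, so anomalies are surfaced rather than silently passed downstream.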

