Adopting AI Technology With Data Privacy In Mind

As boards navigate the integration of GenAI into business operations, balancing innovation with adherence to evolving data privacy laws becomes a strategic imperative. BDO’s Amy Rojik offers a deeper dive into the complexities.

In recent months, pressure to integrate generative AI technology into their operations while adhering to changing privacy regulations in the U.S. and the European Union has created growing concerns for corporate boards and their management teams. Corporate Board Member columnist Matthew Scott recently spoke with Amy Rojik, head of BDO’s Center for Corporate Governance and board member for the Association of Audit Committee Members, Inc. (AACMI), who offered insight on these topics that are likely to be on many corporate board agendas in 2024. The following is an edited version of a portion of that conversation:

For most companies, is dealing with changing data privacy regulations more important than incorporating generative AI into their operations?

It depends on the nature of the organization. Penalties for noncompliance with privacy rules are very expensive. And at the state, federal and global levels, you have a lot of laws and regulations that really need to be understood by companies and their boards. It’s really important for companies to realize where their data may be implicated. For example, companies may not have operations in certain jurisdictions, but under privacy laws, they may be serving clients who reside in those locations and be subject to those rules. So having a good grasp of where your company is susceptible to laws and regulations is critical. And that’s kind of an all-encompassing thing for a lot of companies.

Companies need to be transparent about how they collect and use consumer data. That means providing clear privacy policies, obtaining consent for data collection and allowing consumers to control their personal information. Companies need to elevate data privacy standards along with ethical AI integration policies, controls and systems for monitoring compliance, because it’s not simply a legal requirement. Their privacy and data protection program must also be strategic, enhancing their brand as well as protecting stakeholders. Consumers want to be comfortable that the companies they’re doing business with are using technology appropriately and are going to protect them and keep their information safe. Safeguarding privacy becomes a real issue for reputation and brand loyalty. If a company proves vulnerable, consumers may choose not to do business with it anymore. For boards, this is another reason to prioritize data security mechanisms that really protect company data from exposure. This starts with understanding what the company’s data assets are and how that information is being used.

How should companies handle data privacy issues when the third-party suppliers they use are involved?

Companies have focused on privacy issues by emphasizing prevention of employee errors or internal exposure of data. But we’ve discovered that’s only part of the problem as business has become more interrelated through different supply chains. During COVID, supply chains were disrupted and organizations had to pivot and find other suppliers to work with. In the scramble to do that, companies may not have put their new suppliers through the rigor they would have under normal conditions. That can invite significant vulnerabilities into their supply chains.

Companies must think about this from their shareholders’ perspective. Boards should ask management, “Who are we doing business with? Who’s in our pipeline? Who’s in our supply chain? Where are our vulnerabilities?” Whether it involves procurement or their IT department, there should be collaboration within the organization to make sure that privacy controls are functioning properly.

Boards should also be asking management, “What is our policy and procedure for vetting suppliers within our supply chain?” If there are private or foreign companies in your supply chain, they may not have the same standards regarding privacy and data governance because they don’t face the same regulatory requirements that a public company does. You might also have organizations outside of the U.S. in your supply chain that operate under different regulatory requirements and don’t follow all the same privacy laws as companies in the U.S. Boards need to be vigilant in their oversight of compliance with laws and regulations. Managing compliance across different jurisdictions can be extremely daunting. If there isn’t a level of comfort that management has the resources to do what they need to do to comply, then that’s when boards need to act.

What are the top issues boards and management need to consider when incorporating AI into their business structure?

How to approach AI as an organization: Take a multidisciplinary approach from the board down that includes management, IT and internal audit, and establish policies. Ask, “What are we currently doing? Why are we considering AI? Is there a competitive reason to incorporate AI? Can AI help generate revenue?” And, as importantly, ask, “What happens if we don’t adopt AI?”

Consider how AI fits into the business strategy. This essentially requires consideration of AI through your ERM framework. If you decide to move forward, you’ll then need to determine appropriate guardrails, such as policies, procedures, monitoring systems and communications, to ensure compliance, proper integration and mitigation of risk. Next, the board and management will need to be on the same page when determining priorities, as there are likely to be multiple opportunities for AI use and incorporating AI can be extremely expensive. The acceptable uses of AI should be taught to everyone in the organization.

When it comes to AI innovation, there are ways to do this in a protected environment. For example, when using generative AI, there are guardrails you can establish for the proper use of data used to train AI. Ensure transparency and ethical guidelines are followed, and monitor for misuse, human error or unintended consequences. Build in accountability and consent policies, and continually educate employees and stakeholders on evolving AI risks and opportunities. Although nothing is totally foolproof, these are some of the protective and preventative actions you can take to use this technology in a safe manner. Boards should further ensure management is making concerted efforts to mitigate any biases in the datasets and algorithms being used, to bolster the integrity of their data. The monitoring process should also ensure that clear goals and objectives are established to measure against and to allow for identification of any necessary improvements.

The board looks to management to enhance the business through AI, but it becomes a priority and a responsibility of the board to make sure it exercises proper oversight. AI presents a strategic opportunity, but it’s also a huge risk to the organization if not used properly.

What have you seen companies put in place regarding how AI is being implemented within their business strategies and company operations?

I’ll focus on generative AI because that’s new content being created from existing information. Companies are trying to get their arms around what to do with the technology. We’re seeing companies use it mostly in marketing or for taking documentation and making it much easier to put into a different format. They’re taking all their internal data and putting it into a safe environment where their employees can utilize it. Then they can pull in information from the outside world and construct guardrails that prevent whatever they create from going back out into the broader universe.

There are a lot of applications that allow companies to do this. And it’s not that costly, especially as more organizations enter the world of AI. However, we are seeing a lot of companies purchasing AI applications, whether they’ve been securely tested or not. Some have obviously come out before they were truly vetted, so companies need to be wary. They should be clear about whether the technology fits with their strategy and core values, and they should understand how they are utilizing it as an organization. Does it align with what they’re trying to do?

Is there anything else about data privacy and generative AI that boards should understand?

Management and the board should really understand how their employees are utilizing generative AI. Where is it incorporated into their everyday operations? And do employees have a good understanding of how easily client information or other sensitive documentation can get leaked out to the marketplace?

Companies need to take important preventative steps that include policies and procedures, but they also need to emphasize continual communication and education about the risks associated with AI use. Above all, the board should make sure management has taken all the appropriate steps before giving the green light to go full steam ahead.

