
A Pulse Check on AI in the Boardroom: Balancing Innovation and Risk in an Era of Regulatory Scrutiny

From supply chain disruptions that cost millions of dollars to cyberattacks that paralyze operations, technology gaps are creating escalating risks. As these risks mount, directors face increasing pressure to guide AI adoption responsibly. The challenge is twofold: leveraging AI as a driver of competitiveness, while ensuring it doesn’t create new vulnerabilities in governance, compliance and security.

In new research from Diligent Institute and Corporate Board Member, the Director Confidence Index (DCI) shows that directors are embracing innovation. Nearly all respondents confirmed they are experimenting with AI in some form, and only 2% have ruled it out entirely, signaling a striking openness in boardrooms, which are typically known for caution.

However, enthusiasm is racing ahead of governance, with the findings showing that only 22% of boards have adopted formal AI ethics or risk policies, leaving many companies exposed at a time when regulators, investors and stakeholders are demanding accountability.


Directors Put AI at the Top of their Strategic Agendas

For decades, priorities such as capital strategy, M&A and shareholder engagement dominated board agendas. Now, AI has catapulted to the top. According to the DCI, 64% of directors see AI adoption and advancement as their number one strategic priority, ahead of mergers and acquisitions (58%), supply chain diversification (36%) and workforce strategy (15%).

This shift underscores the recognition that AI is not just a technology upgrade, but a business transformation issue with long-term implications for competitiveness and resilience.

How AI Is Being Used in the Boardroom

Two-thirds of surveyed directors say they are already using AI in their board work, from “dabbling” to regular use, such as meeting preparation (50%), intelligence summarization (39%), and benchmarking (26%). The more advanced uses, like predictive analysis or real-time risk monitoring (13%), remain limited but still highlight AI’s potential to reshape oversight.

What’s concerning is how directors are experimenting: the survey found that 46% are using consumer-facing AI platforms like ChatGPT or Gemini, often without company approval.

In the DCI report, Keith Enright, AI and data privacy leader at Gibson Dunn, warns this creates governance risks: “There may be board directors who take a 700-page board packet, dump it into ChatGPT, and ask for a five-page overview… That use, under those circumstances, likely creates unintended risks and should be avoided.”

These risks include waiving attorney-client privilege, exposing sensitive company data and failing to comply with emerging AI governance requirements.

From Risk Awareness to Risk Management

Based on the survey, directors see three categories of risk as the most urgent:

  1. Data privacy and security: AI increases the potential for breaches, both through new attack surfaces and inadvertent exposure of sensitive materials.
  2. Regulatory uncertainty: With global rules evolving quickly, compliance gaps are a growing concern.
  3. Bias and ethics: Stakeholders expect companies to demonstrate fairness and accountability in AI use, making reputation a critical factor.

These risks align with broader governance pressures, such as regulators demanding more board-level accountability on AI programs’ impact on consumers’ wellbeing and cybersecurity, investors asking more questions about AI ethics, and employees and customers scrutinizing how companies deploy AI technologies responsibly.

The boards making progress against these growing pressures are moving beyond awareness into informed risk-mitigation strategies. A few examples of early best practices include:

  • Creating clear AI governance structures that assign ownership across IT, legal, compliance and the overall business
  • Requiring transparency and explainability in AI systems, ensuring directors understand how decisions are made
  • Embedding ethical frameworks that set guardrails for AI development and use
  • Providing directors with secure, approved AI tools rather than leaving them to experiment with consumer platforms.

Boards that take these steps position themselves not only to meet growing regulatory expectations, but also to be recognized among industry peers for governance leadership. For example, The Wall Street Journal’s recent Top 250 Directors Report highlights this trend, showing how directors are being evaluated not just on financial and strategic outcomes, but on their ability to manage emerging risks.

As Enright notes, “Making the right tools available and educating directors on their appropriate use will become best practice over time.”

Using AI as an Oversight Tool

It’s important to recognize that AI is not just a risk. It also enables strong governance. There are tools available to help directors identify emerging risks by scanning regulatory and market signals, prioritizing material risks based on data-driven analysis and streamlining reporting through AI-powered dashboards that shift board discussions toward proactive oversight.

Directors don’t need to be AI experts, but they do need to ask the right questions and hold management accountable for responsible AI deployment. In a world where regulatory, security and ethical expectations are intensifying, the choices boards make today will shape the trajectory of AI in governance for years to come.

Directors who rise to the challenge, by embracing innovation while safeguarding trust, will not only steer their organizations through disruption but also set the standard for responsible governance in the AI era.

About The Author Of This Article

Dottie Schindlinger, Executive Director, Diligent Institute

Dottie Schindlinger is Executive Director of Diligent Institute, the corporate governance research and programs arm of Diligent – the leading AI-powered provider of secure board communication and governance, risk and compliance software. She co-authored the book “Governance in the Digital Age: A Guide for the Modern Corporate Board Director,” co-hosts “The Corporate Director Podcast,” and co-created Diligent’s certification programs for directors, including AI Ethics & Board Oversight. Dottie was a founding team member of the tech start-up BoardEffect, acquired by Diligent in 2016. Currently, Dottie serves on the boards of the Foundation for Delaware County and the Pennsylvania School Safety Institute (PennSSI). She is a guest lecturer at the MIT Sloan School of Business Executive Education program and a fellow of the Salzburg Global Seminar for Corporate Governance. She is a graduate of the University of Pennsylvania and lives in suburban Philadelphia, PA.


