More and more big companies say AI regulation is a business risk

Lloyd Lee   

  • Some of the top names in artificial intelligence, including Sam Altman, have called for AI regulation.
  • Some Fortune 500 companies worry that the uncertainty around future laws could be bad for business.

As tech leaders squabble over artificial intelligence regulations, their companies' legal departments are highlighting the business risks that the patchwork of early-stage rules could pose.

DeepMind CEO Demis Hassabis, OpenAI cofounder Sam Altman, and even Elon Musk have called for varying degrees of guardrails that they believe could keep the technology from running amok.

Tech-law experts previously told Business Insider that unchecked generative AI could usher in a "dark age" in which models replicate copyrighted work, disincentivizing original creation, and bad actors easily produce and spread misinformation.

As tech leaders and policymakers figure out what those safety measures would actually look like, more Fortune 500 companies are underscoring regulations' possible business risks.

An analysis from Arize AI, a startup that helps companies troubleshoot generative AI systems, found that 137 Fortune 500 companies, or about 27%, identified AI regulation as a risk to their business in annual reports filed with the Securities and Exchange Commission as of May 1.

And the number of Fortune 500 companies that listed AI as a risk factor soared nearly 500% between 2022 and 2024, per Arize's data.

In these annual reports, companies cited the costs that could arise from complying with new laws, the penalties that could come from breaking them, and rules that could slow down AI development.

To be clear, they're not necessarily saying that they oppose AI laws. Instead, the concern is that it's unclear what those laws will look like, how they will be enforced, and whether those rules will be consistent around the world. California's legislature, for example, just passed the first state-level AI bill — but it's unclear if Gov. Gavin Newsom will sign it into law or if other states will follow.

"The uncertainty created by an evolving regulatory landscape clearly presents real risks and compliance costs for businesses that rely on AI systems for everything from reducing credit card fraud to improving patient care or customer service calls," Jason Lopatecki, CEO of Arize AI told Business Insider in an email. "I don't envy the legislator or aide trying to wrap their head around what's happening right now."

Regulation could slow business

Companies' annual reports warn investors of a long list of possible business hits, from specific threats, such as another wave of COVID-19, to general risks, like cybersecurity attacks or bad weather. Now, AI regulation features in that list of unknowns, including the cost of keeping up with new rules.

Meta, for example, mentioned AI 11 times in its 2022 annual report and 39 times in 2023. The company devoted a full page in its 2023 annual report to the risks of its own AI initiatives, including regulation. The tech giant said it was "not possible to predict all of the risks related to the use of AI," including how regulation will affect the company.

Motorola Solutions said in its annual report that complying with AI regulations "may be onerous and expensive, and may be inconsistent from jurisdiction to jurisdiction, further increasing the cost of compliance and the risk of liability."

"It is also not clear how existing and future laws and regulations governing issues such as AI, AI-enabled products, biometrics and other video analytics apply or will be enforced with respect to the products and services we sell," the company wrote.

NetApp, a data infrastructure company, said in its annual report that it aims to "use AI responsibly" but that it may be "unsuccessful in identifying or resolving issues before they arise." The company added that regulation that slows down the adoption of AI could be bad for its business.

"To the extent regulation materially delays or impedes the adoption of AI, demand for our products may not meet our forecasts," the company wrote.

George Kurian, the CEO of NetApp, told The Wall Street Journal that he encourages AI regulation.

"We need a combination of industry and consumer self-regulation, as well as formal regulation," Kurian told the publication. "If regulation is focused on enabling the confident use of AI, it can be a boon."


