An unchecked AI could usher in a new dark age

Jul 16, 2024, 18:38 IST
Business Insider
  • Now is the time to start creating new laws around generative AI, tech-law experts told BI.
  • A new "dark age" could be on the horizon if the industry goes largely unregulated, one said.

The dangers of generative artificial intelligence have already begun to reveal themselves — and now is the time to create laws and regulations around the rapidly advancing technology, tech-law experts told Business Insider.

One legal expert warned that AI could usher in a modern-day "dark age," or a period of societal decline, if the industry isn't regulated.

"If this thing is allowed to sort of run away, without regulation and without compensation to those whose work it's using, it basically is a new dark age," Frank Pasquale, a law professor at Cornell Tech and Cornell Law School, said.

He said that AI "pre-stages" a new dark age and could result in the "complete evisceration of incentives to create knowledge in many fields."

"And that's very troubling," he added.


With the growing popularity of AI tools like OpenAI's ChatGPT and Google's Gemini, experts said that social media — which has been largely unregulated for three decades — should serve as a cautionary tale for AI.

The use of copyrighted work to train the technology is a key concern.

Authors, visual artists, news outlets, and computer coders have filed lawsuits against AI companies like OpenAI, arguing that their original work has been used to train AI tools without their permission.

And while there is no uniform federal law that addresses the use of AI in the US, some states have already passed AI-focused legislation. Congress has also been exploring ways to regulate the technology.

AI regulation, Pasquale said, could prevent many of the problems that would otherwise pave the way to this new "dark age."


"If uncompensated and uncontrolled expropriation of copyrighted works continues, many creatives are likely to be further demoralized and eventually defunded as AI unfairly outcompetes them or effectively drowns them out," he said.

Many people will think low-cost automated content is a "cornucopian gift," Pasquale said, "until it becomes clear that AI itself is dependent on ongoing input of human-generated works in order to improve and remain relevant in a changing world."

"At that point, it may be too late to reinvigorate creative industries left moribund by neglect," he said.

'The dangers are enough now' to put regulations in place

Mark Bartholomew, a University at Buffalo law professor, said he's concerned that AI will eventually generate "so much content, from artworks to advertising copy to TikTok videos, that it overwhelms contributions from real human beings." For now, though, he's more worried about AI being used to spread misinformation, create political and pornographic deepfakes, and scam people by simulating others' voices.

"It would be dangerous to say we know now in 2024 exactly how to handle AI," Bartholomew said. He added that putting too many regulations in place too soon could stifle the "promising new technology."


"My personal opinion is that the dangers are enough now that we need to come in and at least have some specific regulations to deal with things that I think we're already realizing are real problems," Bartholomew said. "It's not like AI will shrivel up and die if we put real teeth into laws saying you can't use AI for political deepfakes."

Intellectual-property laws related to copyright infringement and state-level publicity rights are among the legal frameworks being used to potentially regulate AI in the US.

Harry Surden, a professor of law at the University of Colorado Law School, said that new federal laws should be created to govern AI, but he warned against acting too hastily.

"We're really bad at predicting how these technologies come out and the problems that arise," Surden, who is also the associate director of Stanford University's CodeX Center for Legal Informatics, said. "You don't want to do this quickly or politically or haphazardly."

"You might wind up hurting all the good along with the bad," he said.


Both Bartholomew and Pasquale said that the lack of regulation around social media and the light touch lawmakers have largely taken since its inception should serve as a lesson for dealing with AI.

"It is a cautionary tale," Bartholomew said, adding: "We've waited too long to get our hands on social media, and it's caused some real problems."

"We just haven't been able to find the political will to do much of anything about it," he said.

Pasquale said that when social media first came about, people didn't really anticipate "how badly it could be misused and weaponized by bad actors."

"There's really a precedent in social media for regulation, and doing it sooner rather than later," he said.


Surden said that early discussions regarding the regulation of social media "largely failed to predict other main issues about social media that we are worried about today that many today consider to be more significant."

Those issues include how social media affects young people's mental health and how it propagates misinformation, he said.

He said that we could regulate social media now, but it's not clear what the effective legal solutions are for the societal problems it has created.

"We, as a society, are not often as accurate at predicting issues ahead of time as we like to think," Surden said.

"So there is a similar lesson about AI. We can certainly see issues of today that we need to be careful about, including privacy, bias, accuracy," he said. "But we should be humble about our ability to predict and preemptively regulate AI-technology problems ahead of time, as we are often quite bad about predicting the details or the societal impacts."
