- Since the White House released its Executive Order on AI, issues around the tech have evolved.
- An AI advisor for President Biden told BI there will soon be "legislative conversations" around AI.
President Joe Biden's Special Advisor for Artificial Intelligence has had a busy year.
"There's never a quiet day in AI policy," Ben Buchanan told Business Insider on Tuesday in a 30 minute interview.
Buchanan took on the new AI advisor role last summer after stints in other positions, including on the National Security Council. Since then, President Biden has issued an Executive Order on AI, demanding, in part, greater transparency from companies developing powerful AI tools and outlining actions for various government agencies with regard to the technology.
In the three months since the executive order was issued, the White House has made progress on a number of the directives. However, AI has evolved rapidly in that time, sometimes in troubling ways, like making the creation of deepfakes easier than ever. Generative AI was even used to create a robocall impersonating President Biden's voice that urged people not to vote.
Buchanan admitted that the White House's power to set standards for AI technology is very limited. "That is not something we have the legal authority to do by executive action," he said. However, he alluded to future regulation around AI, noting that the administration will have "legislative conversations here in the month ahead."
Something else that has developed since the executive order came out is the debate around copyright and AI. The US Copyright Office last year opened a public comment period as it considers new rules around generative AI and the training of large language models, which requires huge amounts of online content. Meta, Microsoft, OpenAI, Google, and others use mountains of copyrighted content for free in their model training, but claim that's necessary for the technology to make sense financially. Copyright holders largely disagree.
Buchanan said the White House doesn't have an official stance on this. But its general priority is to "have an innovative AI ecosystem" while ensuring "the people who create meaningful content are appropriately compensated for it."
For BI's complete conversation with Buchanan, edited for length and clarity, see below:
You alluded a minute ago to it being kind of crazy, but has this AI-specific role been very intense compared to your previous roles?
The National Security Council is pretty intense, too. I think probably one thing that's the case is we've been working as an administration on AI basically since we started here in 2021, but the public attention has only increased, and that itself creates more work.
Did the explosion in public awareness surprise you when it happened? I assume it's post-ChatGPT?
I think I was probably less surprised than others because I was so familiar with the technology and what it could do. And then the senior leadership here understood the benefits and risks of AI early.
Are there any use cases of generative AI that you find particularly helpful, useful, hopeful? Where you looked at the tech and thought, wow, this is great.
Yeah, there are three levels here. You can look at the individual level, then you can look at the government-wide use level, and then you could look at big-picture societal priorities. On an individual level, I don't really use generative AI much at all. Mostly because I sit in the government and spend most of my time working in the government, and to my knowledge, we don't have particularly developed policies and processes for this. And also because a huge portion of my job is meetings and email. So there's not a ton of reason to use it here, but I'm not sure where it would be appropriate or inappropriate to do so. So I'm not a particularly compelling use case right now, just given that most of my job is talking to people.
Yeah, okay. There's not an avatar Ben taking your meetings yet, is what you're saying.
Exactly. Sign me up for that though. Second is what are the appropriate federal government uses of this technology? And here I think there's actually a fair amount, and section 10 of the executive order is pretty clear in saying be responsible, but make sure we're innovating here. The Office of Management and Budget is taking the lead here, and this governs how agencies outside of a national security context should use AI. So this is the case where we talk about what are the places where it's pretty high risk to use AI, and we need to really be careful that we're respecting human rights and the like, and what are the cases where it's much lower risk and we should probably be leading pretty aggressively and using AI to deliver better services for Americans.
And then I don't know if you're interested in the last question, which is big picture, what are some of those promising use cases?
Yeah, totally.
Some that I'm really excited about are AI for science and generative AI, but also more generally AI systems in biology and healthcare. Scientific discovery is a priority for the President, and I think we're all united here in recognizing the potential for the technology in that area.
Has anything crossed your desk or come up in any meetings you've had over the past year where you were just like, oh no, this could be potentially bad and you really wanted to get ahead of it?
Sure. I think one example here, and it illustrates the dual nature of a lot of AI, is again AI for science and in particular AI for biology. Our view is the more we can manage those risks, the more we'll be able to harness the benefits. In the same way that in the early days of the railroad, it was the trains that had brakes and safety procedures that could go much faster than the trains that didn't. I think we want to create an ecosystem here at the intersection of AI and biology, but also more generally at the intersection of AI and society, where we've got the right procedures in place so that we can go as fast as we can safely.
The big conversation about improper uses of AI lately has been around deepfakes, which people have been talking about for years. But do you agree with the developing wisdom that the power behind these generative AI tools now makes deepfakes so much easier to create? You need little to no technical know-how now to do something like fake President Biden's voice, et cetera.
Yeah, I don't know exactly where it's at, whether it's little or no know-how. But yeah, I think as a general view, we recognize that increasingly powerful technology makes it easier to do things like clone voices or fake videos. And we want to not inhibit the creativity that could come from having more powerful tools, while also making sure we manage some of the risks.
And there are two places I'd point to here where we've acted on this. One, the President received commitments from the 15 leading AI companies in the United States over the summer of 2023, and that included a commitment to build and deploy watermarking systems, so that when their AI tools generate content, there's a watermark built into it. This is a tricky technical problem and one that's going to take a long time, not to solve the watermarking question, but to solve the question of how you surface that information to a user in a meaningful way.
And then second, in the executive order, we stand up the AI Safety Institute at the Department of Commerce. It has a number of jobs, but one is to set standards for watermarking and content provenance, to help address this. There's another part we're trying to explore, it's a longer process, but essentially cryptographically verifying our own communications so that when people see a video of the President on whitehouse.gov, they know this is a real video of the President and there's some signature in there that does that. So I think this is a case where we recognize the potential for harm. We're trying to get ahead of it, and we've been working on this for quite a while.
So, what do you say to criticism of the executive order? That it's a little bit soft, it's a lot of suggestions, even proposals for proposals, about how this stuff should happen and when. And now we're in an election year.
I certainly get criticism sometimes that the executive order is too aggressive. I don't know that we've gotten a ton of criticism that it's too soft, but maybe folks think that. I think in the case of deepfakes, it is a question of what is the appropriate role for government here, bearing in mind that there are free speech rights and indeed campaign and election rights and all that.
So what we would probably do is not with reference to elections in particular, but more generally, try to create an environment in which folks who consume content generated by AI know that it's been generated by AI. And we've done that through working with the companies and also through the process of setting standards more generally for the industry. I don't know that the standards are legally compelling, right? I don't know that we have legal authority, certainly not under the executive order, to say these are the standards that every AI provider in the United States needs to adhere to for generating content. That is not something we have the legal authority to do by executive action.
Okay, about that. How can you actually compel these companies to report developmental work, report safety results to you? And then what happens if they do and something seems problematic to Commerce? What is the process then of dealing with that, if there is a problem that's flagged?
So the reporting of safety results is a different authority than the watermarking discussion. This is down to the minutiae here, but the watermarking discussion is NIST essentially setting voluntary standards and good practices for 'this is what it should look like to watermark your content.' Now, the companies have already said they are going to watermark their content as part of the voluntary commitments. So hopefully there's some kind of marriage that can happen of good standards set collaboratively with commitments that companies made, and we don't need any legal authority to do it.
Okay. Separate from all of that is, as they're building an extremely capable AI system, what safety obligations do they have? And here we do have an authority in play called the Defense Production Act, which has survey authority, where we can essentially go to the companies and say, we are surveying you for information about the system you are building. And this authority dates from 1953. This has been used in tons of contexts in different industries to understand essentially what the state of play is in those industries. And that is an authority that is legally backed. They're compelled to turn that information over, whereas the watermarking and content provenance work is not that.
Okay. And so you've delivered those surveys, I assume, to all 15 companies then, just to ensure that they are compelled to give you this information?
Yes. The Department of Commerce handles it. The surveys are out. I don't know exactly who got them or how many companies. So I would defer to Commerce on how they structured who got them. But yes, the surveys are out. It was a 90-day deliverable and we met it last week.
This is good minutiae, by the way.
You're talking, for better or for worse, to the right person.
So two more things since we're running out of time: The AI and copyright debate that's going on and then also regulatory action. Do you or the White House have thoughts on where AI training falls in copyright law? Is it transformative fair use? And then, when you look at something like the EU AI Act, do you foresee the US ever going to that length, really enacting national, federal regulation around AI? Or is it more a kind of wait and see if it needs to happen?
Okay, let's go in order. So the Copyright Office, as you probably know, is actually independent of the executive branch. So we don't really direct them, which is probably how it should be. So I think it's fair to say I don't have strong views on that particular matter. There's also the Patent and Trademark Office at the Department of Commerce. We've basically gone to those agencies and asked for insights, but there's not an administration position on that yet in the way that there is in other areas, except to say that our general priorities are making sure that we have an innovative AI ecosystem and making sure the people who create meaningful content are appropriately compensated for it.
Okay. On the regulatory side, I think there's basically two pieces here. The first is what is the EU doing? And there's a fair amount in flux because they've pushed a lot of the authority and details to the office that they're creating. We've had very productive conversations with them and expect to continue productive conversations with them. And that'll be that.
In terms of what we are doing, the President invited the four senators, the "gang of four," to the Oval Office the day after he signed the executive order. They had a good conversation there about broader legislative frameworks. Senator Schumer and his team have held Insight Forums on AI and gotten a lot of feedback from basically all comers. I think we will have legislative conversations here in the month ahead. As you also know, Congress has got a lot going on right now, so I wouldn't want to place bets on timing or anything like that. But I think it's fair to say that we recognize we've done a lot with executive authority, and there are some things you can't do through executive authority that we might want to do, and we'll have that conversation with the Congress.