The other night I attended a press dinner hosted by a company called Box. Other guests included executives from two data-driven companies, Datadog and MongoDB. Usually the leaders at these gatherings are on their best behavior, especially when the discussion is on the record, as this one was. So I was surprised by an exchange with Box CEO Aaron Levie, who told us he had to skip dessert because he was flying to Washington, DC, that evening. He was headed to a lobbying marathon called TechNet Day, where Silicon Valley gets to meet dozens of members of Congress and shape legislation that the (uninvited) public will have to live with. And what did he want from that legislation? “As little as possible,” Levie replied. “I will be single-handedly responsible for shutting down the government.”
He was joking. Sort of. He went on to say that while regulating obvious abuses of AI, like deepfakes, makes sense, it’s far too early to consider mandates like forcing companies to submit their large language models to government-approved auditors, or scanning chatbots for things like bias or the ability to hack actual infrastructure. He cited Europe, which has already adopted restrictions on AI, as an example of what not to do. “What Europe is doing is quite risky,” he said. “There’s a belief in the EU that if you regulate first, you somehow create an atmosphere of innovation,” Levie said. “This has been shown empirically to be false.”
Levie’s remarks run counter to what has become a standard position among Silicon Valley’s AI elite, like Sam Altman. “Yes, regulate us!” they say. But Levie notes that when it comes to spelling out exactly what the laws should say, the consensus falls apart. “As a tech industry, we don’t know what we’re actually asking for,” said Levie. “I haven’t been to a dinner with more than five people working in AI where there is a single agreement on how to regulate AI.” Not that it matters: Levie thinks dreams of a sweeping AI bill are doomed to failure. “The good news is that the United States will never be able to coordinate in this way. There simply will be no AI law in the United States.”
Levie is known for his irreverent chatter. But in this case he’s simply more outspoken than many of his colleagues, whose “please regulate us” stance is a kind of sophisticated theater. The only public event at TechNet Day, at least as far as I could see, was a livestreamed panel discussion on AI innovation featuring Kent Walker, Google’s president of global affairs, and Michael Kratsios, the most recent U.S. chief technology officer and now an executive at Scale AI. The sentiment among these panelists was that the government should focus on protecting American leadership in the field. While admitting that the technology carries risks, they argued that existing laws largely cover the potential nastiness.
Google’s Walker seemed particularly alarmed that some states are developing AI legislation on their own. “In California alone, there are 53 different AI bills currently pending before the legislature,” he said, and he wasn’t boasting. Walker knows, of course, that this Congress can barely keep the government itself afloat, and the prospect of both chambers successfully juggling this hot potato in an election year is about as remote as Google rehiring all eight authors of the transformer paper.
Congress does have AI legislation pending, and the bills keep coming, some perhaps less consequential than others. This week, Representative Adam Schiff, a California Democrat, introduced a bill called the Generative AI Copyright Disclosure Act of 2024. It would require large language models to submit to the Copyright Office “a sufficiently detailed summary of all copyrighted works used…in the training dataset.” It is not clear what “sufficiently detailed” means. Would it be OK to say, “We just scraped the open web”? Schiff’s staffers explained to me that they were adapting a measure that is part of the EU’s AI Act.