Tech warnings against ‘stifling innovation’ are driven by self-interest.
The Debate Over AI Regulation: Innovation vs. Control

Almost a week after the Senate’s first AI Insight Forum, the discourse about AI regulation is running hotter than ever. While the session was conducted behind closed doors, we do know a little about what happened:
Elon Musk warned AI could threaten civilization; Bill Gates argued it could help address world hunger; and when Senate Majority Leader Chuck Schumer asked if the government needs to regulate AI, all of the executives present raised their hands. There were also debates about how AI will affect jobs, how bad actors could abuse open-source AI systems, and whether there should be an independent agency dedicated to overseeing AI.
The goal of all of this, of course, is for the Senate to work through how it might want to regulate this fast-moving technology. And while all of the tech executives in the room may have raised their hands in favor of regulation, there’s since been a chorus of takes from industry leaders about how regulation would stifle innovation—and threaten the United States’ position with China—that makes clear the industry would really prefer to continue running free.
In his opening remarks at the hearing, Texas Republican Sen. Ted Cruz railed against regulation, stating that “if we stifle innovation, we may enable adversaries like China to out-innovate us.” While there’s no doubt that AI has major implications for national security, it also has implications for every other aspect of society and human life. AI is not just the future—it’s a deeply impactful technology that’s been testing current laws and sowing real-world harms for years, from upending copyright law and workers’ rights to cementing discriminatory biases into everything from policing technology to how home loans are approved.
The chorus continued in my inbox. “Heavy-handed regulations will choke our country’s budding leadership in the AI sector and could have a lasting and negative impact on our ability to compete with foreign industry that is accelerating R&D with the support of their own governments,” Muddu Sudhakar, CEO at AI company Aisera, emailed Eye on AI via a representative after the hearing.
The innovation-over-all argument against regulation was perhaps most on display at the recent All-In Summit, where Benchmark general partner Bill Gurley gave a talk titled “2,851 Miles.” Noting 2,851 miles as the distance between Silicon Valley and Washington, D.C., he declared, “The reason Silicon Valley has been so successful is because it’s so fucking far away from Washington, D.C.,” receiving a roar of applause and a standing ovation.
He was immediately joined onstage by fellow VCs for a discussion, where they proceeded to tear into the idea of regulating AI and laughed that regulation would lead to the government doing code reviews and forcing product managers to travel to Washington to get approval on new software features. Tech executives like Docusign CEO Allan Thygesen and Applied Research Institute CEO David Roberts later lauded the talk on LinkedIn.
As always, it’s important to keep in mind that these VCs—much like executives—have a vested interest in letting AI run wild. Benchmark bills itself as focused on AI startups, and many VCs have already made a ton of money in the space. But their innovation-over-all stance has also found some support in the Senate, which critics credit to Big Tech’s lobbying against AI regulation (or at the very least, their efforts to shape the regulation so it minimally affects—and perhaps even benefits—their incumbent positions).
Elad Gil, an investor and self-declared “short-term AI optimist, long-term AI doomer,” argues that the U.S. should let the technology advance and hold off on regulating AI for now. He believes that in the long run, AI poses an existential risk to humanity, but his concern is that regulating it today would only push development overseas and fragment the cutting edge beyond U.S. jurisdiction. He also believes regulation would favor Big Tech incumbents, further entrenching their positions.
While the arguments against regulation center on stifled innovation and the advantage it could hand foreign competitors, proponents point to the need to safeguard against AI’s harms. The technology could revolutionize industries and society as a whole, but it also poses significant risks if left unchecked.
The debate over AI regulation, then, is about more than national security and innovation. It is about weighing the benefits of a powerful technology against its potential harms. As lawmakers and industry leaders navigate this complex issue, the challenge will be crafting regulation that fosters innovation while still protecting individuals and society.
AI in the News
Google unveils Bard integrations for Gmail, Drive, Maps, YouTube, and more.
Called Bard Extensions, the offering enables Bard to find and show users relevant information from across their Google apps. As an example, Google described how this can help users plan a trip, with Bard able to grab dates that work for everyone from Gmail, provide flight and hotel information, show Google Maps directions to the airport, and suggest YouTube videos of things to do at the destination—all within one conversation. The company also announced it improved its “Google it” feature to allow users to easily double-check Bard’s answers.
Google is getting close to releasing its Gemini AI model.
That’s according to The Information, which reported Google has given a small number of companies access to an early version of the conversational AI software. Gemini is Google’s big bet to compete with GPT-4 and is expected to be made available through the Google Cloud Vertex AI service.
Amazon launches generative AI tools to help sellers write product descriptions.
The company claims the tools will help sellers save time and offer customers more complete product information, but there’s reason to be skeptical considering generative AI’s tendency to “hallucinate,” or make things up. EBay also recently released a similar AI tool for writing product descriptions, according to TechCrunch.
Digimarc announces a product for watermarking original content, starting with images.
Many have suggested watermarking AI-generated content as a strategy for distinguishing between artificially generated and human-created works. Digimarc today announced it’s taking the opposite approach with Digimarc Validate, a product for watermarking original content instead. Essentially, it will mark content with a machine-readable “©” that will provide a clear signal of content ownership and authenticity—before any generative AI models have the chance to ingest it.
Eye on AI Research
Big yikes.
Microsoft AI researchers accidentally exposed 38 terabytes of sensitive data while publishing a storage bucket of open-source training data on GitHub, according to research from cloud security startup Wiz, which was shared with TechCrunch. The trove of data included private keys, passwords to Microsoft services, the personal backups of two Microsoft employees’ personal computers, and more than 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees. “No customer data was exposed,” according to Microsoft’s Security Response Center.
The exposure doesn’t bode well for open-source as its role in AI is being fiercely debated, including as part of the Senate’s probe into potential AI regulation. Many security researchers have been ringing the alarm about the security risks of open-source AI projects. And while some believe open-source is critical to democratizing the technology, others argue the seemingly “open” offerings from Big Tech are merely tactics to help these companies own the ecosystem and capture the industry.
Anble on AI
- Billionaire investor Ray Dalio says the AI transformation could create a 3-day workweek.
- Actor Stephen Fry says his voice was stolen from the Harry Potter audiobooks and replicated by AI—and warns this is just the beginning.
- OpenAI realizes that engaging with Europe, rather than threatening it, is the way to get what it wants.
- Top AI institute chair and ex-Amazon exec thinks AI will disrupt employment as we know it—but it’ll make the world wealthier and more skilled.
- 3 investors from Microsoft’s corporate VC arm M12 are striking out on their own with Touring Capital, a new AI-focused firm.
- Spies, scientists, defense officials, and tech founders can’t agree on how to keep AI under control: ‘We’re running at full speed toward a cliff’.
Brainfood
Oops, AI did it again.
A few weeks ago, we wrote about an embarrassing misstep from Microsoft-owned MSN in which a travel guide for Ottawa, Canada, published on its site prominently featured the Ottawa Food Bank as a top tourist attraction—even recommending visitors go with an empty stomach. The article was called out for being extremely insensitive, and now just a few weeks later, it seems the platform has outdone itself.
MSN is in hot water again after publishing a seemingly AI-generated obituary for former NBA player Brandon Hunter, who died unexpectedly this past week. The headline: “Brandon Hunter useless at 42.”
The rest of the article is no better and reads like a nonsensical game of Mad Libs. It says he “handed away” after achieving “vital success as a ahead [sic] for the Bobcats” and “performed in 67 video games,” noted Futurism.
Another day, another example of AI totally failing at this type of use case. So how many times does this have to happen before executives admit it’s not working?
“We are working to ensure this type of content isn’t posted in [the] future,” Jeff Jones, a senior director at Microsoft, told The Verge after last month’s incident, which we now know wasn’t the last time. And it wasn’t the first either; Futurism has been documenting the publication of wonky AI-generated stories on MSN’s platform since the company replaced its human writers with AI last year.