The United States has intensified its push for AI deregulation, with Vice-President JD Vance leading the charge at the Artificial Intelligence Action Summit in Paris. The move comes amid a global debate over balancing innovation with accountability, as major tech firms lobby against proposed European Union AI liability laws.
US Prioritizes Innovation Over Regulation
Speaking before an audience of policymakers and industry leaders, Vance criticized Europe’s regulatory approach, warning that excessive oversight could stifle technological progress.
“We believe that innovation should be free from unnecessary restrictions,” Vance declared. “While we recognize the need for safety, we reject efforts to turn AI into a tool of censorship or bureaucratic control.”
The remarks came in response to the EU’s now-shelved AI Liability Directive, which would have held AI developers accountable for damages caused by their systems. The proposal faced strong opposition from US tech firms, which argued that such measures would create financial and legal roadblocks to AI advancement.
China’s AI Boom Raises Stakes
Behind the US stance is a growing concern over China’s rapid AI advancements, particularly after the recent release of DeepSeek, a chatbot that outperformed its American rivals in user downloads. The bot’s success has sent shockwaves through Silicon Valley, intensifying calls for policies that prioritize AI competitiveness over regulatory constraints.
“DeepSeek’s rise shows that AI leadership isn’t guaranteed,” said Dr. Mark Ellison, a senior analyst at the Global Tech Institute. “The US is responding by doubling down on AI investments, while also pushing back against policies that could slow down progress.”
Big Tech’s Influence on AI Policy
Industry leaders, including OpenAI’s Sam Altman and Google’s Sundar Pichai, have lobbied US policymakers to resist stringent AI regulations. Many of them were present at the summit, reinforcing their argument that market-driven self-regulation is sufficient to prevent AI-related harm.
“Tech companies want a system where they call the shots,” said Jeannie Paterson, a law professor at the University of Melbourne and an AI policy advisor to the Australian government. “They argue that they already have safeguards in place, but history shows that corporate self-regulation often fails to protect consumers.”
Critics point to issues such as deepfake misinformation, biased hiring algorithms, and privacy breaches as examples of AI risks that warrant stronger oversight.
Europe’s Regulatory Influence on Australia
Australia has taken a cautious approach to AI regulation, watching international developments closely. While the US remains the global leader in AI technology, Europe’s regulatory framework could shape Australian policy due to the so-called “Brussels effect.”
“In many cases, companies comply with EU standards simply because they want access to the European market,” Paterson explained. “That means AI regulations set in Brussels could indirectly influence Australian policies, even if we don’t formally adopt them.”
Australia signed on to the summit’s final statement on AI safety, alongside most European nations. However, analysts say the country faces a choice between adopting stricter EU-style regulations or following the US in taking a more hands-off approach.
AI Liability and Consumer Protection
One of the biggest divides between the US and Europe is the question of liability when AI systems cause harm. The US has no dedicated AI liability regime, leaving companies largely shielded from legal responsibility and consumers to pursue individual lawsuits against bad actors.
By contrast, European regulations seek to hold AI developers accountable, requiring transparency in AI-generated content and banning controversial applications such as facial recognition tools that claim to detect emotions, mental health conditions, or sexual orientation.
“These AI applications are not just invasive; they’re often inaccurate,” Paterson said. “Europe has taken a strong stance by banning them outright, whereas the US approach allows the market to decide.”
The Future of AI Regulation
As the AI arms race accelerates, global leaders must walk a fine line between fostering innovation and ensuring ethical safeguards. With the US prioritizing deregulation and Europe advocating accountability, Australia and other nations will have to determine which path to follow.
“The debate over AI regulation is far from settled,” Ellison said. “What’s clear is that the decisions made today will shape the future of AI for generations to come.”