Gary Shapiro: The War on Platforms is a War on Innovation
The author urges Congress to preserve Section 230 and the legal tradition of regulating bad actors rather than the technologies they use, especially AI.
Gary Shapiro
For decades, U.S. law has drawn a clear line: we regulate bad actors, not the tools they use. Courts affirmed that principle in cases involving the VCR, the internet, broadband, and, most recently, artificial intelligence.
These rulings, along with wise congressional decisions to allow "permission-free" innovation, made our country the best place in the world to build and grow a business. Abandoning them now would be a colossal and historic mistake, setting back U.S. innovation leadership, our economy, and our future.
This principle is so vital that the U.S. Supreme Court unanimously reaffirmed it last month, ruling that broadband service providers should not be held liable for copyright infringement by users of their service unless they have shown an intent to contribute to or induce the infringement.
The ruling fits within a long tradition in American law, which assigns responsibility for speech to the speaker. The principles and legal history of the First Amendment encourage not just speaking freely, but receiving free speech, and the U.S. Constitution’s copyright clause not only encourages creativity but allows broad public access to the fruits of creation.
Our legal system recognizes platforms like broadband, telecommunications, and cable are tools — like motor vehicles, computers, or recording devices. We don’t hold makers of these tools responsible for every possible use. In fact, early in my career I led a coalition fighting for this principle in Congress and coordinating amici briefs in the landmark Sony Corp. of America v. Universal City Studios, Inc. (1984) case, more commonly known as the "Betamax decision."
The Supreme Court held that Sony, the VCR's manufacturer, did not violate copyright law even though consumers could use the device to record an entire broadcast TV show. That single decision established that if a product has substantial non-infringing uses as well as potentially illegal ones, it should remain available to the public. It paved the way for VCRs, the home video market, camcorders, the internet, content platforms, AI, and a range of new products and platforms.
Thirty years ago, Congress applied this same thinking to the online world. Section 230 of the Communications Decency Act, often called the "26 words that created the internet," allowed platforms like Airbnb, Amazon, Glassdoor, Nextdoor, Travelocity, Turo, and many others to host user-generated comments and reviews without taking on huge new legal risks. It allowed platforms to thrive, especially smaller platforms and startup ventures, and created distinct communities. Spending time on LinkedIn feels different from X or Instagram, and Yelp doesn't look like YouTube. That's a good thing!
This legacy matters even more in the AI age.
AI tools are built on top of the modern internet. They train on large amounts of data, often from public sources. Users upload roughly 34 million videos to TikTok daily, post around 500 million times a day on X, and submit millions of reviews to Google. Without Section 230-style protections, every AI system that summarizes, recommends, or generates new content from user-generated or third-party material becomes a lawsuit magnet.
The alternative is stultifying, pushing us toward either sterile, overly sanitized AI platforms or chaotic free-for-alls with limited safeguards in place. Neither of those is a good solution for the millions of Americans who regularly use generative AI platforms to shop, plan travel, research new topics, and more.
Lawmakers on Capitol Hill are grappling with these issues as they work to boost American AI innovation and consider the future of Section 230. In fact, the Senate Commerce Committee held a hearing last month on Section 230 protections.
Skepticism from lawmakers during that hearing concerned me. If we move away from the core principle that users are responsible for their own speech, we risk creating a system where any service that touches user content faces constant legal risk.
As Stanford Law professor Daphne Keller said during the Congressional hearing, without Section 230 protections platforms would face “years or decades of legal uncertainty and litigation expense,” something larger platforms might withstand but smaller ones often can’t. That would mean fewer choices, more limits on speech, and slower progress to develop and launch life-changing AI innovations.
Some of these debates feel less like true policy discussions and more like fearmongering about "big tech." Old media interests push government interventions: mandating AM radio, requiring consumers to buy costly TV tuners, and creating new ways to sue innovators out of existence.
Despite their efforts, consumers love new tech services and our nation leads the world in many tech categories. And while concerns about children's safety and online fraud can and should be addressed, Congress should be skeptical of efforts by trial lawyers and old-school media to manipulate lawmakers and kneecap innovative American companies.
It's also unnecessary. If something is illegal in real life — child exploitation, drug trafficking, fraud — it is also illegal online. That remains true in an AI world. Major platforms already invest heavily in detecting, removing, and, when needed, reporting illicit content, and Section 230 does not affect the legal authority to investigate or prosecute crimes. Weakening Section 230 would do little to help that vital work, but it would enrich trial lawyers, undermine online speech, and hurt digital innovation and investment.
National policymakers can and should focus on questions around uniform regulation of AI and algorithms. They should toughen enforcement of online crimes using existing law and embrace solutions like the U.S. Cyber Trust Mark, which gives consumers information and agency. The worst possible move would be to tear down the legal tent pole that has supported U.S. innovation for three decades.
The Supreme Court's rulings and Section 230 are a bulwark for all who support American leadership in the internet, startups, and AI innovation. Congress is right to ask questions and set national guardrails for new innovation, but it should be wary of following Europe's lead, where crushing rules have produced scant innovation and economic stagnation. The U.S. leads because it always welcomes better tools. We must honor that culture and tradition of innovation and follow the principles the Supreme Court has established.
Gary Shapiro is an acclaimed author, lobbyist, and Executive Chair and CEO of the Consumer Technology Association (CTA), which represents over 1,300 consumer technology companies and owns and produces CES — The Global Stage for Innovation. This Expert Opinion is exclusive to Broadband Breakfast.
Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.