The Capitol Hill hearings featuring TikTok's CEO lead off episode 450 of the Cyberlaw Podcast. The CEO handled the endless stream of congressional accusations and suspicion about as well as could have been expected, and it did him as little good as a cynic would have predicted. Jim Dempsey and Mark MacCarthy think Congress is moving toward action on Chinese IT products, probably in the form of the bipartisan Restricting the Emergence of Security Threats that Risk Information and Communications Technology (RESTRICT) Act. But passing legislation and actually doing something about China's IT successes are two very different things.
The FTC is jumping into the policy arena on cloud services, Mark tells us, and it can't escape its DNA: it dwells on possible industry concentration and lock-in while asking little about the national security implications of knocking off a bunch of American cloud providers when the alternatives are largely Chinese. That myopia means the administration won't get as much help as it could from the FTC on cloud security measures. I reissue my standard objection to the FTC's refusal to follow the FCC's lead and defer to the executive branch on national security concerns. Mark and I disagree about whether the FTC Act requires the Commission to limit itself to consumer protection.
Jim Dempsey reviews the latest AI releases, including Google's Bard, which seems to have many of the same hallucination problems as OpenAI's ChatGPT. Jim and I debate what I consider the wacky and unjustified fascination in the press with catching AI engaging in wrongthink. I believe it's just a mechanism for justifying the imposition of left-wing values on AI output, which already scores left/libertarian on 14 of 15 standard tests for identifying ideological affiliation. Similarly, I question the effort to stop AI from hallucinating footnotes in support of its erroneous facts. If ever there were a case for a separate AI citechecker, one that uses generative AI to correct AI errors, the fake citation problem seems like a natural fit.
Speaking of Silicon Valley's lying problem, Mark reminds us that social media platforms are absolutely immune from liability for false user speech, even after they get notice that the speech is harmful and false. He reminds us of his thoughtful argument in favor of tweaking section 230 to more closely resemble the notice-and-action obligations found in the Digital Millennium Copyright Act (DMCA). I argue that the DMCA has not so much countered the incentives to overcensor speech as surrendered to them.
Jim introduces us to an emerging trend in state privacy law: privacy bills that industry supports. Iowa’s new law is the exemplar; Jim questions whether it will satisfy users in the long run.
I summarize Hachette v. Internet Archive, in which Judge John G. Koeltl delivers a harsh rebuke to internet hippies everywhere, ruling that the Internet Archive violated copyright in its effort to create a digital equivalent to public library lending. The judge’s lesson for the rest of us: You might think fair use is a thing, but it’s not. Get over it.
In quick hits,
Download the 450th Episode (mp3)
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.