Free Speech
It’s absolutely stupid just how often we’ve had to write about issues surrounding license plates. The convoluted reasoning goes like this: because states mandate plates on all cars, the plates are government property, and so a state disallowing a vanity plate does not violate the First Amendment. Federal courts have explored caveats to that rule, and some plate-holders have won in the lower courts. And, yet, the disputes over what constitutes a “vulgar” vanity plate continue unabated.
For instance, here is a gentleman with his lovely children posing in front of the vanity plate that the state of Maine approved for him.
That man is Peter Starostecki. Now, what came to mind when you saw his license plate? If your answer is that the plate is vulgar because it reads “Love To F. U.” then, first, get your damned mind out of the gutter and, second, you are a candidate for employment at the Maine BMV.
Starostecki says he got a letter in January from the BMV saying he had to get rid of his “LUVTOFU” plates. “I’m just a vegan. No ill intentions,” Starostecki said.
Like the title of the post says, this dude just likes him some tofu. You can tell as much by some of the vegan-related bumper stickers plastered on either side of the plate itself. When Starostecki responded to the BMV by attending an online hearing to appeal the decision, he encountered government bureaucrats doing their thing.
“From the beginning it felt like they sort of had their minds made up,” Starostecki said.
His appeal was rejected because the plates have to be looked at without context.
Except the gutter-mind context of those reviewing the plates, which were initially approved, it seems.
All of this is very silly. And, frankly, the puritanical viewpoint of the BMV certainly could be resulting in a First Amendment violation.
Unfortunately that is a question that will go unanswered for now. Starostecki has dropped the matter at this point and no longer has a vanity plate at all.
Filed Under: free speech, license plate, luvtofu, maine, maine bmv, peter starostecki, tofu, vanity license plate, vanity plates
Failures
Elon Musk’s next big revenue bet is that companies really, really, really want to show up as “verified.” All evidence suggests that very few Twitter users are interested in paying Elon $8/month to a guy who constantly breaks the site and engages in ego-driven experiments that make the general experience worse.
A few weeks ago, we found out that he’s trying to get organizations to pay $42,000 a month to access the Twitter API, and maybe that was just a framing technique. Because Twitter has announced the next round of its check mark program, which begins with deleting the “legacy” checkmark holders (which, honestly, to many of us is a huge relief), but also telling businesses and organizations they need to pay $1,000/month if they want to keep their checkmark.
The page for “Twitter Verified Organizations” claims (laughably) that they’re “creating the most trusted place on the internet for organizations to reach their followers.” It’s hilarious that anyone believes that. And, apparently, the way to create “the most trusted place” is to make sure that no users know whether organizations are legit unless those organizations are willing to pay through the nose.
In the US, it’s a flat rate, $1,000 per month, with a $50/month additional fee for each “affiliate seat subscription.”
That “affiliate seat subscription” appears to be for employees who work for the company and are promoting it:
The best marketing comes directly from real people on Twitter. Now, you can affiliate your organization’s champions so that everyone knows where they work. Affiliates receive a small image of their organization’s Twitter account profile picture next to their name every time they Tweet, send a DM, or appear in search.
You can affiliate anyone who represents or is associated with your organization: leadership, product managers, employees, politicians, customer support, franchises, sub-brands, products and so on. An account you invite to affiliate must accept your invitation.
I’m sure some sucker companies are going to pay up, but this is going to get expensive very fast for any small or medium-sized business, so why bother? And, yes, this is all flat rate pricing, so giant consumer packaged goods companies may be willing to pay, but non-profits? Small businesses? Governments? It applies to all of them:
Twitter Verified Organizations enables organizations of all types–businesses, non-profits, and government institutions–to sign up and manage their verification and to affiliate and verify any related account.
In some ways, this is just Musk making a bet on extortion. Organizations and governments that don’t pay will be much more likely to get impersonated on Twitter and risk serious problems. So Musk is basically betting on making life so bad for organizations that they’ll have to pay these ridiculous rates to avoid people impersonating them.
I’m not sure how that creates “the most trusted place on the internet,” but then again, I didn’t set $44 billion on fire to fuck up a website I didn’t understand.
Filed Under: extortion, non-profits, organizations, trust, verified
Companies: twitter
(Mis)Uses of Technology
We’ve noted how agricultural machinery giants like John Deere have spent several years waging war on independent tractor repair shops in a bid to monopolize maintenance and drive up costs. We’ve also noted that every time industry promises to stop doing this, it turns out they’re largely full of shit.
With John Deere now facing increased action on “right to repair” reform in Congress, at the DOJ, and in numerous states, the company has been trying to pre-empt reform by striking silly, pointless memorandums of understanding with key agricultural groups.
For example last January, John Deere struck such a deal with the American Farm Bureau Federation, claiming it would do a better job of making repair manuals and parts available to independent repair shops and farmers, if the Farm Bureau agreed to never support right to repair legislative reform.
The problem: the agreement wasn’t actually binding, was designed to pre-empt real reform carrying real penalties, and John Deere already had a long history of empty promises on this front.
This week, the American Farm Bureau Federation struck another similar memorandum of understanding, but this time with another agricultural giant with a history of attempting to monopolize repair: CNH Industrial. It’s effectively the same as the John Deere deal; CNH Industrial pinky swears that it will try a little bit, in exchange for the AFBF agreeing to not support meaningful legislative reform.
Right to repair activists at organizations like PIRG aren’t particularly impressed:
Our key criticism of the Deere MOU was that it did not provide farmers with reasonable paths to recourse should the manufacturer deny them repair materials. And the manufacturer could walk away from the agreement with a mere 30 days’ notice.
Both of these are true for the CNH Industrial MOU as well. As a result, farmers are at real risk of being left out in the cold without what they need to fix equipment they spend up to $800,000 for. That’s not an acceptable outcome.
Industry giants aren’t going to meaningfully adhere to voluntary pinky swear agreements. They’re simply trying to delay the inevitable implementation of state and federal right to repair guidelines with actual teeth. Groups that claim to represent constituents and are keen on real reform probably shouldn’t be letting themselves be used as props toward that end.
Filed Under: consumer rights, reform, right to repair, right to repair legislation, tractors
Companies: american farm bureau, cnh, john deere
Failures
On Thursday, Utah’s governor Spencer Cox officially signed into law two bills that seek to “protect the children” on the internet. He did so with a signing ceremony that he chose to stream on nearly every social media platform, despite his assertions that those platforms are problematic.
Yes, yes, watch live on the platforms that your children shouldn’t use, lest they learn that their governor eagerly supports blatantly unconstitutional bills that suppress the free speech rights of children, destroy their privacy, and put them at risk… all while claiming he’s doing this to “protect them.”
The decision to sign the bills is not a surprise. We talked about the bills earlier this year, noting just how obviously unconstitutional they are, and how much damage they’d do to the internet.
The bills (SB 152 and HB 311) do a few different things, each of which is problematic in its own special way.
Leaving aside the fun of banning data collection while requiring age verification (which requires data collection), the bill is just pure 100% nanny state nonsense.
Children have their own 1st Amendment rights, which this bill ignores. It assumes that teenagers have a good relationship with their parents. Hell, it assumes that parents have any relationship with their kids, and makes no provisions for how to handle cases where parents are not around, have different names, are divorced, etc.
Also, the lack of data collection combined with the requirement to prevent addiction creates a uniquely ridiculous scenario in which these companies have to make sure they don’t provide features and information that might lead to “addiction,” but can’t monitor what’s happening on those accounts, because it might violate the data collection restrictions.
As far as I can tell, the bill both requires social media companies to hide dangerous or problematic content from children, and blocks their ability to do so.
Because Utah’s politicians have no clue what they’re doing.
Meanwhile, Governor Cox seems almost gleeful about just how unconstitutional his bill is. After 1st Amendment/free speech lawyer Ari Cohn laid out the many constitutional problems with the bill, Cox responded by picking a Twitter fight, saying he looked forward to seeing Cohn in court.
Perhaps Utah’s legislature should be banning itself from social media, given how badly they misunderstand basically everything about it. They could use that time to study up on the 1st Amendment, because they need to. Badly.
Anyway, in a de facto admission that these laws are half-baked at best, they don’t go into effect until March of 2024, a full year away, as Utah seems to recognize that it may need to amend them to avoid getting completely pantsed in court. But they are going to get pantsed in court, because I can almost guarantee these bills will face constitutional challenges before long.
Filed Under: 1st amendment, age verification, children, parental permission, parents, privacy, protect the children, social media, social media bans, spencer cox, utah
Content Moderation
Back in August 2020, the Trump White House issued an executive order purporting to ban TikTok, citing national security concerns. The ban ultimately went nowhere — but not before TikTok and Oracle cobbled together “Project Texas” as an attempt to appease regulators’ privacy worries and keep TikTok available in the United States.
The basic gist of Project Texas, Lawfare reported earlier this year, is that TikTok will stand up a new US-based subsidiary named TikTok US Data Security (USDS) to house business functions that touch US user data, or which could be sensitive from a national security perspective (like content moderation functions impacting Americans). Along with giving the government the right to conduct background checks on potential USDS hires (and block those hires from happening!), TikTok committed as part of Project Texas to host all US-based traffic on Oracle-managed servers, with strict and audited limits on how US data could travel to non-US-based parts of the company’s infrastructure. Needless to say, Oracle stands to make a considerable amount of money from the whole arrangement.
Yesterday’s appearance by TikTok CEO Shou Zi Chew before the House Energy and Commerce Committee shows that even those steps, and the $1.5 billion TikTok are reported to have spent standing up USDS, may prove to be inadequate to stave off the pitchfork mob calling for TikTok’s expulsion from the US. The chair of the committee, Representative Cathy McMorris Rodgers of Washington, didn’t mince words in her opening statement, telling Chew, “Your platform should be banned.”
Even as I believe at least some of the single-minded focus on TikTok is a moral panic driven by xenophobia, not hard evidence, I share many of the national security concerns raised about the app.
Chief among these concerns is the risk of exfiltration of user data to China — which definitely happened with TikTok, and is definitely a strategy the Chinese government has employed before with other American social networking apps, like Grindr. Espionage is by no means a risk unique to TikTok; but the trove of data underlying the app’s uncannily prescient recommendation algorithm, coupled with persistent ambiguities about ByteDance’s relationship with Chinese government officials, poses a legitimate set of questions about how TikTok user data might be used to surveil or extort Americans.
But there’s also the more subtle question of how an app’s owners can influence what people do or don’t see, and which narratives on which issues are or aren’t permitted to bubble to the top of the For You page. Earlier this year, Forbes reported the existence of a “heating” function available to TikTok staff to boost the visibility of content; what’s to stop this feature from being used to put a thumb on the scale of content ranking to favor Chinese government viewpoints on, say, Taiwanese sovereignty? Chew was relatively unambiguous on this point during the hearing, asserting that the platform does not promote content at the request of the Chinese government, but the opacity of the For You page makes it hard to know with certainty why featured content lands (or doesn’t land) in front of viewers.
Whether you take Chew’s word for it that TikTok hasn’t done any of the nefarious things members of Congress think it has — and it’s safe to say that members of the Energy and Commerce Committee did not take him at his word — the security concerns stemming from the possibility of TikTok’s deployment as a tool of Chinese foreign policy are at least somewhat grounded in reality. The problem is that solutions like Project Texas, and a single-minded focus on China, may end up having the counterproductive result of making the app less resilient to malign influence campaigns targeting the service’s 1.5 billion users around the world.
A key part of how companies, TikTok included, expose and disrupt coordinated manipulation is by aggregating an enormous amount of data about users and their behavior, and looking for anomalies. In infosec jargon, we call this “centralized telemetry” — a single line of sight into complex technical systems that enables analysts to find a needle (for instance, a Russian troll farm) in the haystack of social media activity. Centralized telemetry is incredibly important when you’re dealing with adversarial issues, because the threat actors you’re trying to find usually aren’t stupid enough to leave a wide trail of evidence pointing back to them.
Here’s a specific example of how this works:
In September 2020, during the first presidential debate of the 2020 US elections, my team at Twitter found a bunch of Iranian accounts with an awful lot to say about Joe Biden and Donald Trump. I found the first few — I wish I was joking about this — by looking for Twitter accounts registered with phone numbers with Iran’s +98 country code that tweeted with hashtags like “#Debate2020.” Many were real Iranians, sharing their views on American politics; others were, well, this:
Yes, sometimes even government-sponsored trolling campaigns are this poorly done.
As we dug deeper into the Iranian campaign, we noticed that similar-looking accounts (including some using the same misspelled hashtags) were registered with phone numbers in the US and Europe rather than Iran, and were accessing Twitter through different proxy servers and VPNs located all over the world. Many of the accounts we uncovered looked, to Twitter’s systems, like they were based in Germany. It was only by comparing a broad set of signals that we were able to determine that these European accounts were actually Iranian in origin, and part of the same campaign.
Individually, the posts from these accounts didn’t clearly register as being part of a state-backed influence operation. They might be stupid, offensive, or even violent — but content alone couldn’t expose them. Centralized telemetry helped us figure out that they were part of an Iranian government campaign.
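The cross-referencing described above can be illustrated with a toy sketch. This is not TikTok’s or Twitter’s actual detection logic — the field names, thresholds, and data are entirely hypothetical — but it captures the core idea of centralized telemetry: no single signal is conclusive, and it’s the combination of weak signals, compared against a known-bad “seed” account, that surfaces likely campaign members.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Per-account metadata a platform's internal telemetry might hold.
    All field names here are hypothetical, for illustration only."""
    account_id: str
    phone_country: str   # country code the account registered with
    login_country: str   # where its traffic appears to originate
    via_proxy: bool      # traffic routed through known proxies/VPNs
    hashtags: set[str]   # hashtags the account has posted with

def likely_same_campaign(a: AccountSignals, seed: AccountSignals) -> bool:
    """Flag an account as a candidate member of the same campaign as a
    known-bad 'seed' account when multiple weak signals line up.
    Each signal alone is innocuous; the combination is what matters."""
    score = 0
    # Reuse of the seed's distinctive (e.g. misspelled) hashtags
    if len(a.hashtags & seed.hashtags) >= 2:
        score += 1
    # Traffic obscured behind proxies or VPNs
    if a.via_proxy:
        score += 1
    # Mismatch between registration country and apparent location
    if a.login_country != a.phone_country:
        score += 1
    return score >= 2

# A seed account found via Iran's +98 country code, using misspelled hashtags
seed = AccountSignals("seed", "+98", "+98", False, {"#Debat2020", "#Ellection"})
# An account registered in the US, appearing to be in Germany, behind a VPN
suspect = AccountSignals("a1", "+1", "+49", True,
                         {"#Debat2020", "#Ellection", "#politics"})
print(likely_same_campaign(suspect, seed))  # True: three weak signals align
```

A real system would weigh hundreds of signals (device fingerprints, registration timing, IP ranges) rather than three, but the principle is the same: only a party with a centralized view of all accounts can run this comparison at all.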
Let’s turn back to TikTok, though:
TikTok… do a lot of this work right now, too! They’ve hired a lot of very smart people to work on coordinated manipulation, fake engagement, and what they call “covert influence operations” — and they’re doing a pretty good job! There’s a ton of data about their efforts in TikTok’s (also quite good!) transparency report. Did you know TikTok blocks an average of 1.8 billion fake likes per month? (That’s a lot!) Or that they remove an average of more than half a million fake accounts a day? (That’s also a lot!) And to their credit, TikTok’s state-affiliated media labels appear on outlets based in China. TikTok have said for years that they invest heavily in addressing manipulation and foreign interference in elections — and their own data shows that that’s generally true.
Now, you can ask very reasonable questions about whether TikTok’s highly capable threat investigators would expose a PRC-backed covert influence operation if they found one — the way Twitter and Facebook did with a campaign associated with the US Department of Defense in 2022. I personally find it a little… fishy… that the company’s Q3 2022 transparency report discloses a Taiwanese operation, but not, say, the TikTok incarnation of the unimaginably prolific, persistent, and platform-agnostic Chinese influence campaign Spamouflage Dragon (which Twitter first attributed to the Chinese government in 2019, and which continues to bounce around every major social media platform).
But anyway: the basic problem with Project Texas and the whole “we’re going to air-gap US user data from everything else” premise is that you’re establishing geographic limits around a problem that does not respect geography — and doing so meaningfully hinders the company’s ability to find and shut down the very threats of malign interference that regulators are so worried about.
Let’s assume that USDS staff have a mandate to go after foreign influence campaigns targeting US users. The siloed nature of USDS means they likely can only do that work using data about the 150 million or so US-based users of TikTok, a 10% view of the overall landscape of activity from TikTok’s 1.5 billion global users. Their ability to track persistent threat actors as they move across accounts, phone numbers, VPNs, and hosting providers will be constrained by the artificial borders of Project Texas.
(Or, alternatively, do USDS employees have unlimited access to TikTok’s global data, but not vice versa? How does that work under GDPR? The details of Project Texas remain a little unclear on this point.)
As for the non-USDS parts of TikTok, otherwise known as “the overwhelming majority of the platform,” USDS turns any US-based accounts into a data void. TikTok’s existing threat hunting team will be willfully blind to bad actors who host their content in the US — which, not for nothing, they definitely will do as a strategy for exploiting this convoluted arrangement.
USDS may seem like a great solution if your goal is not to get banned in the US (although yesterday’s hearing suggests that it may actually be a failure when it comes to that, too). But it’s a terrible solution if your goal is to let threat investigators find the bad actors actually targeting the people on your platform. Adversarial threats don’t respect geographic limits; they seek out the lowest-friction, lowest-risk ways to carry out their objectives. Project Texas raises the barriers for TikTok to find and disrupt inauthentic behavior, and makes it less likely that the company’s staff will be successful in their efforts to find and shut down these campaigns. I struggle to believe the illusory benefits of a US-based data warehouse exceed the practical costs the company necessarily takes on with this arrangement.
At the end of the day, Project Texas’s side effects are another example of the privacy vs security tradeoffs that come up again and again in the counter-influence operations space. This work just isn’t possible to do without massive troves of incredibly privacy-sensitive user data and logs. Those same logs become a liability in the event of a data breach or, say, a rogue employee looking to exfiltrate information about activists to a repressive government trying to hunt them down. It’s a hard problem for any company to solve — much less one doing so under the gun of an impending ban, like TikTok have had to.
But, whatever your anxieties about TikTok (and I have many!), banning it, and the haphazard Project Texas reaction to a possible ban, won’t necessarily help national security, and could make things worse. In an effort to stave off Chinese surveillance and influence on American politics, Project Texas might just open the door for a bunch of other countries to be more effective in doing so instead.
Yoel Roth is a technology policy fellow at UC Berkeley, and was the head of Trust & Safety at Twitter.
Filed Under: centralized telemetry, china, content moderation, coordinated manipulation, fake engagement, foreign influence operations, national security, project texas, threat assessment, trust & safety
Companies: oracle, tiktok, tiktok usds
Deals
The ChatGPT By OpenAI Training Bundle has four courses to introduce you to ChatGPT. You will learn the fundamentals of working with ChatGPT, a state-of-the-art language model developed by OpenAI. You’ll gain hands-on experience using ChatGPT to generate text that is coherent and natural, and you will explore the many possibilities for using this tool in a variety of applications. The bundle is on sale for $19.97.
Note: The Techdirt Deals Store is powered and curated by StackCommerce. A portion of all sales from Techdirt Deals helps support Techdirt. The products featured do not reflect endorsements by our editorial team.
Filed Under: daily deal
Stupidity
As soon as it was announced, we warned that the new “Select Subcommittee on the Weaponization of the Federal Government” (which Kevin McCarthy agreed to support to convince some Republicans to back his speakership bid) was going to be not just a clown show, but one that would itself be weaponized to suppress speech — the very thing it claimed to be “investigating.”
To date, the subcommittee, led by Jim Jordan, has lived down to its expectations, hosting nonsense hearings in which Republicans on the subcommittee accidentally destroy their own talking points and reveal themselves to be laughably clueless.
Anyway, it’s now gone up a notch beyond just performative beclowning to active maliciousness.
This week, Jordan sent information requests to Stanford University, the University of Washington, Clemson University and the German Marshall Fund, demanding they reveal a bunch of internal information that serves no purpose other than to intimidate and suppress speech. You know, the very thing that Jim Jordan pretends his committee is “investigating.”
House Republicans have sent letters to at least three universities and a think tank requesting a broad range of documents related to what it says are the institutions’ contributions to the Biden administration’s “censorship regime.”
As we were just discussing, the subcommittee seems taken in by Matt Taibbi’s analysis of what he’s seen in the Twitter files, despite nearly every one of his “reports” on them containing glaring, ridiculous factual errors that a high school newspaper reporter would likely catch. I mean, here he claims that the “Disinformation Governance Board” (an operation we mocked for the abject failure of the administration in how it rolled out an idea it never adequately explained) was somehow “replaced” by Stanford University’s Election Integrity Project.
Except the Disinformation Governance Board was announced, and then disbanded, in April and May of 2022. The Election Integrity Partnership was very, very publicly announced in July of 2020. Now, I might not be as decorated a journalist as Matt Taibbi, but I can count on my fingers to realize that 2022 comes after 2020.
Look, I know that time has no meaning since the pandemic began. And that journalists sometimes make mistakes (we all do!), but time is, you know, not that complicated. Unless you’re so bought into the story you want to tell you just misunderstand basically every last detail.
The problem, though, goes beyond just getting simple facts wrong (and the list of simple facts that Taibbi gets wrong is incredibly long). It’s that he gets the less simple, more nuanced facts, even more wrong. Taibbi still can’t seem to wrap his head around the idea that this is how free speech and the marketplace of ideas actually works. Private companies get to decide the rules for how anyone gets to use their platform. Other people get to express their opinions on how those rules are written and enforced.
As we keep noting, the big revelations so far (if you read the actual documents in the Twitter Files, and not Taibbi’s bizarrely disconnected-from-what-he’s-commenting-on commentary), is that Twitter’s Trust and Safety team was… surprisingly (almost boringly) competent. I expected way more awful things to come out in the Twitter Files. I expected dirt. Awful dirt. Embarrassing dirt. Because every company of any significant size has that. They do stupid things for stupid fucking reasons, and bend over backwards to please certain constituents.
But… outside of a few tiny dumb decisions, Twitter’s team has seemed… remarkably competent. They put in place rules. If people bent the rules, they debated how to handle it. They sometimes made mistakes, but seemed to have careful, logical debates over how to handle those things. They did hear from outside parties, including academic researchers, NGOs, and government folks, but they seemed quite likely to mock/ignore those who were full of shit (in a manner that pretty much any internal group would do). It’s shockingly normal.
I’ve spent years talking to insiders working on trust and safety teams at big, medium, and small companies. And nothing that’s come out is even remotely surprising, except maybe how utterly non-controversial Twitter’s handling of these things was. There’s literally less to comment on than I expected. Nearly every other company would have a lot more dirt.
Still, Jordan and friends seem driven by the same motivation as Taibbi, and they’re willing to do exactly the things that they claim they’re trying to stop: using the power of the government to send threatening intimidation letters that are clearly designed to chill academic inquiry into the flow of information across the internet.
By demanding that these academic institutions turn over all sorts of documents and private communications, Jordan must know that he’s effectively chilling not just their speech, but that of any academic institution or civil society organization that wants to study how false information (sometimes deliberately pushed by political allies of Jim Jordan) flows across the internet.
It’s almost (almost!) as if Jordan wants to use the power of his position as the head of this subcommittee… to create a stifling, speech-suppressing, chilling effect on academic researchers engaged in a well-established field of study.
Can’t wait to read Matt Taibbi’s report on this sort of chilling abuse by the federal government. It’ll be a real banger, I’m sure. I just hope he uses some of the new Substack revenue he’s made from an increase in subscribers to hire a fact checker who knows how linear time works.
Filed Under: academic research, chilling effects, congress, intimidation, jim jordan, matt taibbi, nonsense peddlers, research, twitter files, weaponization subcommittee
Companies: clemson university, german marshall fund, stanford, twitter, university of washington
Policy
Earlier this month we noted how a successful, often homophobic smear campaign scuttled the nomination of popular reformer Gigi Sohn to the FCC. The GOP and telecom sector, as usual, worked in close collaboration to spread all manner of lies about Sohn, including claims she was an unhinged radical who hated Hispanics, cops, puppies, and freedom.
But there was no shortage of blame to be had on the Democratic side as well.
Unlike the shock and awe Khan nomination and promotion, the Biden administration waited nine months to even nominate Sohn, giving industry ample runway to create its ultimately successful campaign. Maria Cantwell buckled to repeated GOP requests for unnecessary show hearings, used to push false claims about Sohn that the industry had seeded in the press via various nonprofits. Chuck Schumer failed to whip up votes. Senators Cortez Masto, Kelly, and Manchin all buckled to industry fear campaigns, preventing a swift 51-vote Senate confirmation.
And nobody in the Biden administration thought it was particularly important to provide any meaningful public messaging support as Sohn faced down a relentless industry smear campaign alone. The entire process from beginning to end was a hot, incompetent mess.
And from every indication, there’s no real evidence that the Biden administration had a plan B in the wake of Sohn’s nomination falling apart. Weeks after the fact, the White House still hasn’t pulled Sohn’s name from consideration or proposed a replacement candidate:
In a note to financial types, former top FCC official Blair Levin, now a media analyst, said a growing list of people apparently interested in the open seat suggests it would be a fair assumption that the White House did not have a plan B. “It may be sometime before it selects a new nominee, further delaying the moment when the Democrats obtain an FCC majority,” Levin said.
Given this was the industry’s entire goal, I’m sure they’re pleased.
AT&T, Verizon, Comcast, and News Corporation want to keep the nation’s top telecom and media regulator gridlocked at 2-2 commissioners, so it can’t take action on any issues deemed too controversial to industry, whether that’s restoring net neutrality, forcing broadband providers to be more transparent about pricing, or restoring media consolidation limits stripped away during the Trump era.
Whoever is chosen to replace Sohn will surely be more friendly to industry in a bid to avoid a repeat. Assuming that person is even seated, it won’t be until much later this year, at which point they’ll have very little time to implement any real reform before the next presidential election. The policies that will be prioritized probably won’t be the controversial or popular ones, like net neutrality.
I’m not sure who has been giving Biden broadband policy advice of late, but it’s pretty clear that for all of the administration’s talk about “antitrust reform” (which has included some great work on “right to repair”), a functioning FCC with a voting majority and competent broadband consumer protection has never actually been much of a priority.
Yes, the Biden administration has done good work on pushing for an infrastructure bill that will soon throw $45 billion at industry to address the digital divide. But most of that money will be going, as usual, to entrenched local monopolies that helped create the divide in the first place through relentless efforts to crush competition and stifle nearly all competent oversight.
And without a voting majority, the agency will have a steeper uphill climb when it comes to basic things like shoring up broadband mapping, or holding big ISPs accountable should they provide false broadband coverage data to the FCC. The current FCC says all the right things about that pesky “digital divide,” but its leaders are generally terrified to even mention that telecom monopolies exist, much less propose any meaningful strategy to undermine their power.
It seems clear that Biden’s advisors don’t really think the FCC’s role as consumer watchdog is important (they’ve shoveled a lot of the heavy lifting to the NTIA), or didn’t think fighting over the FCC’s consumer protection authority was worthwhile. And given the current DC myopic policy focus on Big Tech, having a functioning media and telecom regulator willing and able to hold the nation’s hugely unpopular telecom monopolies accountable has easily just… fallen to the cutting room floor.
There’s a reason that Americans pay some of the most expensive prices for mediocre broadband in the developed world. There’s a reason that 83 million Americans live under a broadband monopoly. And it’s in no small part thanks to a feckless, captured FCC, and politicians who don’t have the backbone to stand up to major campaign contributors bone-grafted to our intelligence gathering apparatus.
Filed Under: 5g, antitrust reform, biden, broadband, digital divide, fcc, fiber, gigabit, gigi sohn, high speed internet, jessica rosenworcel, monopoly, regulatory capture, telecom