Mike Masnick's Techdirt Profile

Mike Masnick

About Mike Masnick (Techdirt Insider)

Mike is the founder and CEO of Floor64 and editor of the Techdirt blog.

He can be found on Bluesky at bsky.app/profile/mmasnick.bsky.social, on Mastodon at mastodon.social/@mmasnick, and still a little bit (but less and less) on Twitter at www.twitter.com/mmasnick.

Posted on Techdirt - 29 July 2024 @ 09:26am

Instead Of Fact Checking MAGA, Democrats Have Moved On To Vibe Checking

If you’ve been paying any attention to the political news in the last week, you may have seen stories about couch fucking, or dolphin porn, or burnt monkey testicles, or even cat ladies. Or really just about any of the comments coming out of the Harris campaign, or from other Democratic supporters, calling out the fact that Donald Trump and running mate JD Vance are just fucking weird, man.


In the last week, there’s been a notable difference in the way the Democrats have campaigned against the Trumpist GOP, since Joe Biden dropped out of the 2024 Presidential election. The Democrats have not focused that much on debunking any of the many, many (many) falsehoods the Trump/Vance campaign spews. They haven’t even focused as heavily on the extreme policies the campaign is pushing (though, that’s a part of it).

Instead, they’ve leaned in deeply on just how fucking weird Trump and Vance and their core beliefs are. And, in many ways, this seems to be generating excitement, at least online, from Democrats who were kinda blah about Biden and his chances.

Of course, this kind of campaign-by-meme and focusing on mockery over policy was embraced by Donald Trump going back nearly a decade. It took forever for the Democrats to figure out a way to counter it. Trump’s entire argument against Democrats has been to constantly mock them and never engage seriously on policy. And it’s worked.

For years now, so many have insisted that the best way to respond to the Gish Gallop of Donald Trump is to try to actually debunk his many falsehoods. The emphasis on “fact-checking” everything has been an obsession of the media, though Republicans have turned fact-checking against the media.

I wrote eight years ago that fact-checking is mostly useless in convincing voters. As I wrote then, fact-checking often seems to reinforce and entrench opinions, rather than change them. Yet, so many Democrats (and media folks) seemed to think the way to deal with the non-stop flood of falsehoods from Republicans is to counter them with facts and policy ideas.

And, of course, those things have their place. But they suck as the main strategy for getting voters interested.

I’m reminded of a conversation I had long ago with Susan Benesch from the Dangerous Speech Project. She has spent many years studying so-called “dangerous speech,” which is speech that leads to real harm, as well as ways to counteract it. She pointed out that one strategy that is effective in some cases is mockery/humor as counterspeech. That’s not to say it’s the only strategy, but it’s often a useful one.

And it’s one that hasn’t really been used that much in response to Trumpism. Until now.

There’s just something powerful about taking back control over the framing. The MAGA world has moved the Overton window so much on certain issues. Perhaps the best way to make people understand this is to just shine a light on how fucking weird their positions are, and how out of touch they are with what most people actually believe.

Who knows if it will be effective in the long run. I have no sense about the political viability of it all. But at a first pass, it seems like it’s done an impressive job in reframing the debate away from this idea that Trumpworld are plotting to destroy everything (which feels unbelievable) to just: get a fucking load of what these dumbasses believe, and how incredibly dorky they are.

Already, Republicans are freaking out about it and trying to get the Democrats to stop calling them out on things like this. Hilariously, Vivek Ramaswamy is whining about it being “juvenile” and demanding that we get back to debating “policy.”


Dude, come on. None of us were born yesterday. The Trumpist GOP has been “dumb and juvenile” from the day Trump came down the escalator at Trump Tower and started raining down insults and condescending nicknames on all who disagreed with him. It’s all been insults and ad hominems.

I’m sure that there will be some adjustments and attempts to counter these arguments, but for the first time in eight years it feels like Democrats actually realized that the way to go after Trump is not to try to respond to all the nonsense, but to actually trust that people can recognize the nonsense.

It’s not about saying “and here are the reasons this is nonsense.” Instead, they are now saying “holy shit, did you see that same nonsense I did? I mean… really! Are these guys that wacko?”

It’s not a fact check. It’s a vibe check. Those dudes are spewing utter nonsense, and they seem kinda creepy and weird.

And so far, it’s working.

Posted on Techdirt - 26 July 2024 @ 12:10pm

Ctrl-Alt-Speech: Live At TrustCon 2024

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In the first ever live recording of Ctrl-Alt-Speech, Mike and Ben are joined at TrustCon 2024 by Dona Bellow, Platform Safety Policy at Reddit, and Alice Hunsberger, PartnerHero’s VP of Trust & Safety and Content Moderation, to round up the latest news in online speech, content moderation and internet regulation.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and by our sponsor TaskUs, a leading company in the T&S field which provides a range of platform integrity and digital safety solutions. In our Bonus Chat at the end of the episode, also recorded live at TrustCon, Mike sits down with Rachel Guevara, TaskUs Division Vice President of Trust and Safety, to talk about her impressions of this year’s conference and her thoughts on the future of trust and safety.

You can also watch the video of the recording on our YouTube channel.

Posted on Techdirt - 26 July 2024 @ 09:32am

KOSA Will Come To The Senate Floor On Tuesday, Senators Paul & Wyden Explain Why It’s Still Bad

On Thursday, as expected, the Senate voted for “cloture” on the extremely problematic Kids Online Safety Act (KOSA). The cloture vote is a procedural vote necessary to bring a full vote to the floor. Previously, attempts to move KOSA forward “by unanimous consent” could be (and were) blocked by objections from at least one Senator (often Senator Wyden).

The cloture vote, in effect, overrides such a block, and moves to have a second vote on the floor. In this case, the cloture vote won, 86 to 1, meaning the real vote will happen on Tuesday. The one “nay” vote was from Senator Rand Paul. It took some by surprise, but Senator Wyden voted yes on cloture.

It’s been widely reported that Schumer has been negotiating with Wyden on some changes to try to deal with the larger concerns with KOSA. In the end, some small, but important changes were made to the bill at the behest of Wyden, including explicit text that nothing in KOSA overrides Section 230.

My purely speculative guess is that the basic deal was that with this minor change, Wyden would agree to vote in favor of cloture, but could still vote against the actual bill next week. Indeed, immediately after the cloture vote, Wyden put out a statement about why he could not support the bill:

“After months of negotiations, the Kids Online Safety Act (KOSA) has been improved, thanks to hard work by Commerce Chair Cantwell and Leader Schumer. The changes that I, LGBTQ+ advocates, parents, student activists, civil rights groups and others have fought for over the last two years have made it less likely that the bill can be used as a tool for MAGA extremists to wage war on legal and essential information to teens.  

“I thank all of the advocates, parents, young people and concerned citizens that have raised their views about KOSA with me, both in support of the bill and with concern about its implications. 

“I strongly support elements of this bill, especially Senator Markey’s Children and Teens’ Online Privacy Protection Act, which will safeguard the personal information of young people online. Provisions regulating addictive design elements used by platforms to keep young people hooked are valuable safeguards that will make tech products safer. 

“Unfortunately, KOSA’s improvements, while constructive, remain insufficient. I fear this bill could be used to sue services that offer privacy-enhancing technologies like encryption or anonymity features that are essential to young people’s ability to communicate securely and privately without being spied on by predators online. I also take seriously concerns voiced by the American Civil Liberties Union, Fight for the Future, and LGBTQ+ teens and advocates that a future MAGA administration could still use this bill to pressure companies to censor gay, trans and reproductive health information.

“For these reasons, I cannot vote for this legislation.  

“However, if this bill is signed into law by the President, I look forward to working with my colleagues to conduct rigorous oversight of the FTC to ensure that my worst fears about this bill do not come true and that kids benefit from a safer internet.  

“Whatever happens to this bill, I look forward to working with my colleagues on other initiatives, including regulating harmful and manipulative platform designs, to tackle the vital topic of kids’ safety online. I also remain convinced that this effort must go hand-in-hand with passing a strong baseline privacy law for all Americans.”

And, thus, the underlying and still fundamentally dangerous bill is slightly less dangerous and the “trade” to improve the bill was that Wyden would vote for cloture. And that vote was effectively meaningless, since the cloture threshold would have been easily met even if Wyden had voted no on cloture.

The one nay vote, Senator Paul, also sent a “Dear Colleague” letter to the other Senators, and it’s one of the clearest, most straightforward explanations of why KOSA is bad. The letter is written in a manner that both Democrats and Republicans should be able to understand (i.e., it doesn’t engage in partisan culture war nonsense, but just spits facts).

Dear Colleague:

This week, the Senate will consider S. 1409, the Kids Online Safety Act (KOSA). While the intent of this legislation is laudable, the bill raises significant First Amendment concerns, imposes vague, undefined requirements on internet platforms, and empowers politically motivated enforcers to advance their own ideological interests to the detriment of the American people. I will be voting against this bill, and I encourage you to do the same.

KOSA would impose an unprecedented “duty of care” on internet platforms to mitigate certain harms associated with mental health, such as anxiety, depression, and eating disorders. While proponents of the bill claim that it is not designed to regulate content, imposing a “duty of care” on online platforms to mitigate harms associated with mental health can only lead to one outcome: the stifling of First Amendment protected speech.

Should platforms stop children from seeing climate-related news because climate change is one of the leading sources of anxiety amongst younger generations? Should they stop children from seeing coverage of international conflicts because it could lead to depression? Should pro-life groups have their content censored because platforms worry that it could impact the mental well-being of teenage mothers? This bill opens the door to nearly limitless content regulation.

The bill contains a number of vague provisions and undefined terms. The text does not explain what it means for a platform to “prevent and mitigate” harm, nor does it define “addiction-like behaviors.” Additionally, the bill does not explicitly define the term “mental health disorder.” Instead, it references the Fifth Edition of the Diagnostic and Statistical Manual of Mental Health Disorders or “the most current successor edition.” As such, the definition could change without any input from Congress.

We do not impose these types of burdens on any other sector of the economy. For example, the bill seeks to protect minors from alcohol and gambling ads on certain online platforms. However, minors can turn on the TV to watch the Super Bowl or the PGA tour and see the exact same ads without any problem.

This bill is a Trojan Horse. It claims to protect our children, but in reality, it stifles free speech and deprives Americans of the numerous benefits created by the internet. Any genuine effort to protect children online must start at home. And if the government does decide to get involved, it must ensure that First Amendment rights are protected, and platforms have clear guidelines on how to comply with the law. This bill fails to do either.

I intend to vote against S. 1409 and encourage you to do the same.

Honestly, this is one of the most compelling arguments against KOSA that I’ve seen, so kudos to Senator Paul and his staff for writing it. The point about how kids can just turn on TV and see the exact same content is a pretty key argument.

Unfortunately, it’s unlikely to have even the slightest effect. KOSA has 70 cosponsors, all of whom want to get nonsense headlines in their local papers about how they voted to “protect the children” even as the bill will actually do real harm to children.

While the vote on Tuesday will be important, the real fight now moves to the House. It’s unclear if there’s consensus on moving on the bill there, and if so, in what form. The current House bill is different than the Senate one, so the two sides would have to agree on what version moves forward. The real answer should be neither, but it seems like the ship has sailed on the Senate version.

Still, kudos to Wyden and Paul for continuing to fight the good fight against a dangerous bill.

Posted on Techdirt - 25 July 2024 @ 01:43pm

FTC Oversteps Authority, Demands Unconstitutional Age Verification & Moderation Rules

Call me crazy, but I don’t think it’s okay to go beyond what the law allows, even in pursuit of “good” intentions. It is consistently frustrating how this FTC continues to push the boundaries of its own authority, even when the underlying intentions may be good. The latest example: in its order against a sketchy messaging app, the FTC has demanded things that it’s not clear it can order.

This has been a frustrating trend with this FTC. Making sure the market is competitive is a good thing, but bringing weak and misguided cases makes a mockery of its antitrust power. Getting rid of non-competes is a good thing, but the FTC doesn’t have the authority to do so.

Smacking down sketchy anonymous messaging apps preying on kids is also a good thing, but once again, the FTC seems to go too far. A few weeks ago, the FTC announced an order against NGL Labs, a very sketchy anonymous messaging app that was targeting kids and leading to bullying.

It certainly appears that the app was violating some COPPA rules on data collection for sites targeting kids. And it also appears that the app’s founders were publicly misrepresenting aspects of the app, as well as hiding the fact that, when users were charged, they were actually signing up for a weekly subscription. So I have no issues with the FTC going after the company for those things. Those are the kinds of actions the FTC should be taking.

The FTC’s description highlights at least some of the sketchiness behind the app:

After consumers downloaded the NGL app, they could share a link on their social media accounts urging their social media followers to respond to prompts such as “If you could change anything about me, what would it be?” Followers who clicked on this link were then taken to the NGL app, where they could write an anonymous message that would be sent to the consumer.

After failing to generate much interest in its app, NGL in 2022 began automatically sending consumers fake computer-generated messages that appeared to be from real people. When a consumer posted a prompt inviting anonymous messages, they would receive computer-generated fake messages such as “are you straight?” or “I know what you did.” NGL used fake, computer-generated messages like these or others—such as messages regarding stalking—in an effort to trick consumers into believing that their friends and social media contacts were engaging with them through the NGL App.

When a user would receive a reply to a prompt—whether it was from a real consumer or a fake message—consumers saw advertising encouraging them to buy the NGL Pro service to find out the identity of the sender. The complaint alleges, however, that consumers who signed up for the service, which cost as much as $9.99 a week, did not receive the name of the sender. Instead, paying users only received useless “hints” such as the time the message was sent, whether the sender had an Android or iPhone device, and the sender’s general location. NGL’s bait-and-switch tactic prompted many consumers to complain, which NGL executives laughed off, dismissing such users as “suckers.”

In addition, the complaint alleges that NGL violated the Restore Online Shoppers’ Confidence Act by failing to adequately disclose and obtain consumers’ consent for such recurring charges. Many users who signed up for NGL Pro were unaware that it was a recurring weekly charge, according to the complaint.

But just because the app was awful, the founders behind it were awful, and it seems clear they violated some laws, does not mean any and all remedies are open and appropriate.

And here, the FTC is pushing for some remedies that are likely unconstitutional. First off, it requires age verification and the blocking of all kids under the age of 18.

  • Required to implement a neutral age gate that prevents new and current users from accessing the app if they indicate that they are under 18 and to delete all personal information that is associated with the user of any messaging app unless the user indicates they are over 13 or NGL’s operators obtain parental consent to retain such data;

But, again, most courts have repeatedly made clear that government-mandated age verification or age-gating is unconstitutional on the internet. The Supreme Court just agreed to hear yet another case on this point, but it’s still a weird choice for the FTC to demand this here, knowing that the issue could end up before a hostile Supreme Court.

On top of that, as Elizabeth Nolan Brown points out at Reason, it appears that part of what the FTC is mad about regarding NGL is simply the idea that offering anonymous communications tools to kids is inherently harmful behavior that shouldn’t be allowed:

“The anonymity provided by the app can facilitate rampant cyberbullying among teens, causing untold harm to our young people,” Los Angeles District Attorney George Gascón said in a statement.

“NGL and its operators aggressively marketed its service to children and teens even though they were aware of the dangers of cyberbullying on anonymous messaging apps,” the FTC said.

Of course, plenty of apps allow for anonymity. That this has the potential to lead to bullying can’t be grounds for government action.

So, yes, I think the FTC can call out NGL for violating COPPA and take action based on that, but I don’t see how it can legitimately force the app to age gate at a time when multiple courts have already said the government cannot mandate such a thing. And it shouldn’t be able to claim that anonymity itself is somehow obviously problematic, especially at a time when studies often suggest the opposite for some kids who need their privacy.

The other problematic bit is that the FTC is mad that NGL may have overstated their content moderation abilities. The FTC seems to think that it can legally punish the company for not living up to the FTC’s interpretation of NGL’s moderation promises. From the complaint itself:

Defendants represent to the public the NGL App is safe for children and teens to use because Defendants utilize “world class AI content moderation” including “deep learning and rule-based character pattern-matching algorithms” in order to “filter out harmful language and bullying.” Defendants further represent that they “can detect the semantic meaning of emojis, and [] pull[] specific examples of contextual emoji use” allowing them to “stay on trend, [] understand lingo, and [] know how to filter out harmful messages.”

In reality however, Defendants’ representations are not true. Harmful language and bullying, including through the use of emojis, are commonplace in the NGL App—a fact of which Defendants have been made aware through numerous complaints from users and their parents. Media outlets have reported on these issues as well. For example, one media outlet found in its testing of the NGL App that the App’s “language filters allowed messages with more routine bullying terms . . . including the phrases ‘You’re fat,’ ‘Everyone hates you,’ ‘You’re a loser’ and ‘You’re ugly.’” Another media outlet reported that it had found that “[t]hreatening messages with emojis that could be considered harmful like the knife and dagger icon were not blocked.” Defendants reviewed several of these media articles, yet have continued to represent that the NGL App is “safe” for children and teens to use given the “world class AI content moderation” that they allegedly employ.
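To see why that kind of marketing claim is so easy to overstate, consider how crude a “rule-based character pattern-matching” filter can be in practice. Here is a purely illustrative sketch (my own, not NGL’s actual system) of a blocklist-style filter and the kinds of messages it misses:

```python
import re

# A toy blocklist-style filter: it catches only messages containing one of a
# handful of explicit patterns, and knows nothing about context, phrasing, or emoji.
BLOCKED_PATTERNS = [r"\bkill yourself\b", r"\bkys\b", r"\bslut\b"]

def is_blocked(message: str) -> bool:
    return any(re.search(p, message, re.IGNORECASE) for p in BLOCKED_PATTERNS)

tests = [
    "kys",                   # caught: exact pattern match
    "Everyone hates you",    # missed: no blocked pattern present
    "You're a loser",        # missed
    "You're ugly",           # missed
    "watch your back 🔪",    # missed: emoji "context" isn't understood at all
]
for message in tests:
    print(is_blocked(message), message)
```

Real systems are more sophisticated than this, but the gap between “we have filters” and “harmful language and bullying are filtered out” is exactly the gap the complaint describes.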

I recognize that some people may be sympathetic to the FTC here. It definitely looks like NGL misrepresented the power of their moderation efforts. But there have been many efforts by governments or angry users to sue companies whenever they feel that they have not fully lived up to public marketing statements regarding their moderation.

People have sued companies like Facebook and Twitter for being banned, arguing that public statements about “free speech” by those companies meant that they shouldn’t have been banned. How is this any different from that?

And the FTC’s claim here, that if you promise your app is “safe” and someone can still find “harmful language and bullying” on the platform then you’ve violated the law, flies in the face of everything we just heard from the Supreme Court in the Moody case.

The FTC doesn’t get to be the final arbiter of whether or not a company successfully moderates away unsafe content. If it could, that power would be subject to widespread abuse. Just think of whichever presidential candidate you dislike the most, and what would happen if they could have their FTC investigate any platform they dislike for not living up to its public promises on moderation.

It would be a dangerous, free speech-attacking mess.

Yes, NGL seems like a horrible company, run by horrible people. But go after them on the basics: the data collection in violation of COPPA and the sneaky subscription charging. Not things like age verification and content moderation.

Posted on Techdirt - 25 July 2024 @ 09:30am

Elon’s ExTwitter Engagement Stat Exaggeration: Outside Stats Paint A Bleaker Picture

Does anyone actually trust Elon to be honest about, well, anything? Last week he claimed that ExTwitter hit a new “all-time high” on engagement, with “417 billion user-seconds globally” and that in the US it was 93 billion “user-seconds.”


First off, what the fuck are “user-seconds”? This is not a typical measure in the internet world, and it’s a potentially misleading one. Does it count people just seeing tweets in the wild? What counts as a “user-second”? If someone just leaves a tab open, does that continue to count? Do they have to be actively engaging with the site? There are so many questions about what that stat even means.

But, more importantly, as Media Post points out, Elon had announced numbers back in March that suggested even higher engagement data than what he claimed was this new “record” high. Of course, there appears to be some gamesmanship with the numbers, as the March numbers Media Post is discussing are per month, and Elon seems to be talking about a single (very newsworthy, right after the assassination attempt) day on ExTwitter:

Musk posted that X saw a cumulative “417 billion user-seconds globally” in one day — equating to 27.8 minutes per users — at 250 million daily active users, which does not align with X’s user reportage in March, when the company said users were spending 30 minutes per day with the app on average.

The company also claimed 8 billion total minutes in March, but 417 billion seconds only equates to 6.95 billion minutes, which either negates the “record high engagement” now or invalidates the previous numbers.

On July 15, Musk also posted that in the U.S., user seconds reached 93 billion — “23% higher than the previous record of 76B.” This equates to 15.5 minutes per user, on average, based on X’s previous reportage of 100 million U.S. users — a figure that is lower than expected.

The lack of standardized reporting allows for playing with the numbers to misrepresent how popular the site is.
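To make the unit gymnastics concrete, here’s a quick back-of-the-envelope check of the figures above (all inputs are claims from Musk’s posts or X’s earlier statements, not independently verified numbers):

```python
# Rough sanity check of the "user-seconds" arithmetic cited above.
# All inputs are claims from Musk's posts or X's earlier statements, not verified data.

global_user_seconds = 417e9      # claimed worldwide "record" day
daily_active_users = 250e6       # claimed daily active users
minutes_per_user = global_user_seconds / daily_active_users / 60
print(round(minutes_per_user, 1))            # 27.8 minutes per user, vs. ~30 claimed in March

total_minutes_billions = global_user_seconds / 60 / 1e9
print(round(total_minutes_billions, 2))      # 6.95 billion minutes, vs. 8 billion claimed in March

us_user_seconds = 93e9           # claimed US "record"
us_users = 100e6                 # previously reported US user count
print(round(us_user_seconds / us_users / 60, 1))   # 15.5 minutes per US user
```

Whether you trust any of those inputs is another question entirely, but at least the conversions line up with Media Post’s math.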

Meanwhile, that same article also highlights how outside observers see little to no evidence of higher engagement on the site, and plenty of evidence of decline:

… a new report by data intelligence platform Tracer shows “significant drops” in user engagement and “drastic drops” in advertising unlike competitors like YouTube, Instagram and Pinterest.

In June, X advertising saw drops month-over-month and year-over-year, the report shows, with click-through-rates (CTRs) declining 78% month-over-month, which the report suggests reflects a sharp downturn in user activity. In addition, cost-per-thousand (CPMs) decreased 17% from May to June, suggesting that advertisers are also leaving X.

[….]

Comparatively, Instagram, Pinterest and YouTube have seen dramatic user engagement increases recently. Instagram’s CTRs surged by 89% over the past year, while YouTube and Pinterest saw increases of 77% and 385%, respectively. The success these platforms are seeing is likely a direct result of the introduction of new video-first launches, such as Instagram Reels, YouTube Shorts, and a host of new features on Pinterest.

Is it any wonder that the site is struggling, with Elon telling advertisers to go fuck themselves and then threatening to sue those advocating pulling ads from the site?

But really, when it comes down to details, does anyone believe Elon’s random “best day ever!” tweets to be trustworthy?

Posted on Techdirt - 24 July 2024 @ 09:26am

No, Elon Isn’t Blocking Kamala From Getting Followers, And Congress Shouldn’t Investigate

Gather ’round, children, and let me tell you a tale of rate limiting, misinterpreted screenshots, and how half the internet lost its mind over a pretty standard Twitter error. This error was then interpreted through an extremely partisan political prism, leading previous arguments to flip political sides based on who was involved.

The desire to attack editorial discretion knows no political bounds. Partisan attacks on free speech seem to flip the second the players switch.

I think it’s become pretty clear over the past couple of years that I’m no fan of how Elon Musk runs ExTwitter. He makes terrible decision after terrible decision. Indeed, he seems to have a knack for doing the wrong thing pretty consistently.

But this week there’s been a hubbub of anger and nonsense that I think is totally unfair to Musk and ExTwitter. Musk did come out in support of Donald Trump a couple weeks back and has gone quite far in making sure that everyone on the platform is bombarded with pro-Trump messages. I already called out the hypocrisy of GOP lawmakers who attacked the former management of Twitter for “bias” as they did way, way less than that.

But, as you might have heard, on Sunday Joe Biden dropped out of the Presidential race and effectively handed his spot over to Kamala Harris. The “@BidenHQ” account on ExTwitter was renamed and rebranded “@HarrisHQ.” Not surprisingly, a bunch of users on the site clicked to follow the account.

At some point on Monday, some people received a “rate limiting” error message, telling them that the user was “unable to follow more people at this time.”


Lots of people quickly jumped to the conclusion that Musk was deliberately blocking people from following Harris. And, yes, I totally understand the instinct to believe that, but there’s little to suggest that’s actually what happened.

First off, rate limiting is a very frequently used tool in trust & safety efforts to try to stop certain types of bad behavior (often spamming). And it’s likely that ExTwitter has some sort of (probably shoddily done) rate limiting tool that kicks in if any particular account suddenly gets a flood of new followers.

Having an account — especially an older account that changes names — suddenly get a large flood of new followers is a pattern consistent with spam accounts (often a spammer will somehow take over an old account, change the name, and then flood it with bot followers). It’s likely that, to combat that, ExTwitter has systems that kick in after a certain point and rate limit the followers.

The message, which blames the follower rather than the followed account, might just be shoddy programming on ExTwitter’s part. Or it might be because part of the “signal” found in this pattern is that when a ton of accounts follow an old account like this, it often means all those follower accounts are now being flagged as potential bots (again, spam accounts flood newly obtained accounts with bot followers).

In other words, these rate limiting messages are entirely consistent with normal trust & safety automated systems.
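For illustration, here’s a very simplified sketch of the kind of automated check described above. It is purely hypothetical: ExTwitter’s actual systems, signals, and thresholds are not public, so the names and numbers here are made up:

```python
import time
from collections import deque
from typing import Deque, Dict, Optional

# Hypothetical follower-flood rate limiter (names and thresholds are invented).
# If an account gains too many new followers in a short window, a pattern often
# associated with spam/bot activity, further follow requests are temporarily refused.

WINDOW_SECONDS = 600         # look at the last 10 minutes of follow events
MAX_NEW_FOLLOWERS = 5000     # invented threshold for this sketch

recent_follows: Dict[str, Deque[float]] = {}   # account id -> timestamps of recent follows

def allow_follow(target_account: str, now: Optional[float] = None) -> bool:
    """Return True if a new follow of target_account should go through."""
    now = time.time() if now is None else now
    events = recent_follows.setdefault(target_account, deque())

    # Drop follow events that have aged out of the window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()

    if len(events) >= MAX_NEW_FOLLOWERS:
        # Rate limit kicks in: the would-be follower sees a generic
        # "unable to follow more people at this time" style error.
        return False

    events.append(now)
    return True
```

A crude check like this would trip on a legitimate surge, such as millions of people following a freshly renamed campaign account, just as easily as on a bot flood. Which is the point: the behavior is consistent with automation, not with someone deliberately flipping a switch.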

Of course, most users immediately assumed the worst. Many posted their screenshots and insisted it was Musk putting his thumb on the scales. The New Republic (which is usually better than this) rushed in with an article where at least the headline suggests Musk is doing this intentionally: “Trump-Lover Elon Musk Is Already Causing Kamala Harris Problems.”

Then, some site called The Daily Boulder (?!?) made it worse by misinterpreting a tweet by Musk as supposedly admitting to doing something. The Daily Boulder report is very misleading in multiple ways. First, it falsely states that users trying to follow Harris got a “something went wrong” error, when they actually got the rate limiting error shown above. The “something went wrong” error was from something else.

After the @BidenHQ account was changed to @HarrisHQ, if you tried to go directly to @BidenHQ, rather than redirect, Twitter just showed an error message saying “Something went wrong.” Elon screenshotted that and said “Sure did.”


This is a joke. Musk is joking that “something went wrong” with Joe Biden and/or the Biden campaign. Not that something went wrong with anyone trying to follow the Harris campaign.

The Daily Boulder piece confused the two different error messages. It seemed to think (incorrectly) that the screenshot Musk posted was of the Harris campaign account when it was the Biden one (I get that this is a bit confusing because the Biden account became the Harris account, but they don’t “redirect” if you go straight to the old name).

Either way, tons of Harris supporters flipped out and insisted that Musk was up to no good and was interfering. And, as much as I think Musk would have no issue doing something like this, nothing here suggests anything was done deliberately (indeed, I’ve tried to follow/unfollow/refollow the HarrisHQ account multiple times since Monday with no problem).

Still, Democrat Jerry Nadler has already called for an investigation, making him no better than Jim Jordan. Tragically, the NBC article reporting on that call fails to link to Nadler’s actual letter, leaving me to do their work for them. Here it is.

The letter is addressed to Jim Jordan, asking him to investigate this issue. That’s because Jordan is the chair of the House Judiciary Committee. Nadler is the top Democrat on the committee, but is effectively powerless without Jordan’s approval. The most charitable version of this is that Nadler is trolling Jordan, given all of Jordan’s hearings insisting that bias in the other direction was obviously illegal, and his unwillingness to hold similar hearings now that the shoe is on the other foot.

Indeed, some of the letter directly calls out Jordan’s older statements when the accusations went in the other direction:

If true, such action would amount to egregious censorship based on political and viewpoint discrimination—issues that this Committee clearly has taken very seriously.

As you have aptly recognized in the past: “Big Tech’s role in shaping national and international public discourse today is well-known.” Against this import, you have criticized tech platforms for alleged political discrimination. As you wrote in letters to several “Big Tech” companies: “In some cases, Big Tech’s ‘heavy-handed censorship’ has been ‘use[d] to silence prominent voices’ and to ‘stifle views that disagree with the prevailing progressive consensus.’” In your view, platform censorship is particularly harmful to the American public because, “[b]y suppressing free speech and intentionally distorting public debate in the modern town square, ideas and policies were no longer fairly tested and debated on their merits.” Ironically, X’s CEO Elon Musk himself has expressed similar sentiment: “Given that Twitter serves as the de facto public town square, failing to adhere to free speech principles fundamentally undermines democracy.”

Given your long track record of fighting against political discrimination on the platform “town squares” of American discourse, I trust that you will join me in requesting additional information from X regarding this apparent censorship of a candidate for President of the United States. The Committee should immediately launch an investigation and request at a minimum the following information from X.

But still, even if you’re trolling, Congress shouldn’t be investigating any company for their editorial choices. The answer to this weaponization of the government should not be even more weaponization of the government.

Which brings us to the final point in all of this. Even if it were true that Musk were doing this deliberately (and, again, there is no evidence to support that), it would totally be within his and ExTwitter’s First Amendment rights to do so.

I understand this upsets some people, but if it upsets you, think back to how you felt when Twitter banned Donald Trump. If you’re mad about this, I’m guessing there’s a pretty high likelihood you supported that move, right? That was also protected by the First Amendment. Platforms have First Amendment rights over who they associate with and who they platform. Twitter could choose to remove President Trump. ExTwitter could choose to remove or block the Harris campaign.

That’s how freedom works.

And to answer one other point that I saw a few people raise, no, this also would not be an “in kind contribution” potentially violating election law. We already went through this a few years back when the GOP whined that Google was giving Democrats in-kind contributions by filtering more GOP fundraiser emails to spam (based on their own misreading of a study). Both the FEC and the courts pointed out that this was not an in-kind contribution and was not illegal. The court pointed out that such filtering is clearly protected under Section 230.

The same is true here.

It’s fine to point out that this is a dumb way to handle issues. Or that ExTwitter should have made sure that people could follow the newly dubbed HarrisHQ account. But I haven’t seen anything that looks out of the ordinary, and I think people’s willingness to leap to the worst possible explanation for anything Musk related has gone too far here.

But even worse is Nadler’s call for an investigation. Even if it was just to mock Jordan’s other investigations, there’s no reason to justify such nonsense with more nonsense.

Posted on Techdirt - 24 July 2024 @ 05:27am

Schumer Advances KOSA: Congress’s Latest ‘But Think Of The Children’ Crusade

Apparently, the only time Congress can come together and agree on something is when it’s to give whoever is President the power to censor speech online. That’s the only conclusion I can come to regarding the widespread support for KOSA (the Kids Online Safety Act), which Senator Chuck Schumer has announced will be coming to the floor for a vote.

Our elected officials have been told again and again why KOSA is a dangerous bill that will enable targeted censorship of protected speech. They continue to push it forward and insist that it would never be abused. And, yes, the “updated” version of KOSA from earlier this year is better than earlier versions of KOSA, but it’s still a censorship bill.

The bill still retains its “duty of care” section, which the FTC can enforce. It requires websites to “exercise reasonable care” in the design of features to avoid harm. But harm remains a risk, often through no fault of any particular platform. We constantly see websites blamed for problematic decisions made by users. But users are always going to make problematic decisions, and under KOSA, whoever is in charge of the FTC can rake a company over the coals, claiming a failure to meet that duty of care.

It seems strange that Republicans, who seem to hate Lina Khan, now want to give her the power to go after Elon Musk’s ExTwitter for failing to properly protect users. But that’s what they’ll do.

On the flip side, why are Democrats giving a potential future Trump FTC the power to go after any website that is too “woke” by enabling LGBTQ content and thus failing its “duty of care” to protect the children?

Like so many powerful would-be censors, they only think about how exciting that censorship power will be in their own hands, and not in the hands of their political opponents.

Schumer is also bringing COPPA 2.0 to the floor. As we explained earlier this year, COPPA 2.0 basically takes the already problematic COPPA and makes it much worse. It might not be as inherently harmful as KOSA, but it’s still pretty harmful.

For one, this is just going to lead to more sites trying to ban teenagers from using their apps entirely, since it raises the age of restrictions from 13 to 16… and that will just mean more teens being taught to lie about their age.

Second, it effectively mandates privacy-destroying age verification by banning targeted ads to kids. But how do you know they’re kids unless you verify their ages? This idea is so short-sighted. The only way to ban “targeted” ads based on collected data is to first… collect all the same data. That seems like a real issue.

In addition, it will change the important “actual knowledge” standard for covered platforms (which is kinda necessary to keep it constitutional) to a “reasonably likely to be used” standard, meaning that even if websites make every effort to keep kids off their platform, all an enforcer needs to do is argue that they haven’t done enough because the platform was “reasonably likely to be used by” kids.

Both of these are “do something” bills. “Here’s a problem, we should do something, this is something.” They are something. They won’t help solve the problems, and are quite likely to make them worse.

But, politicians want the headlines about how they’re “protecting the children” which is exactly what the big news orgs will falsely repeat. What they should be noting is that these bills are about politicians cynically using children as props to pretend to do something.

Senators Marsha Blackburn (who said quite clearly that she wrote KOSA to “protect children from the transgender”) and Richard Blumenthal (who has made it clear that he’d just as soon kill the internet if it got him headlines) put out an obnoxious, exploitative statement about how this will save the children, when it will actually do tremendous harm to them.

Some questions remain about what will happen on the House side, as Speaker Mike Johnson has said they’ll look over whatever the Senate sends. But the existing House version of KOSA, while somewhat different than the Senate version, is equally problematic.

If you’d like to reach out to your elected officials in Congress about these bills, Fight for the Future has the StopKOSA website that includes a way to send emails. And EFF also has their own action center to contact your elected officials regarding KOSA.

Posted on Techdirt - 23 July 2024 @ 01:40pm

Ctrl-Alt-Speech Spotlight: Modulate CEO Mike Pappas On Voice Moderation

Ctrl-Alt-Speech is a weekly podcast about the latest news in online speech, from Mike Masnick and Everything in Moderation’s Ben Whitelaw.

Subscribe now on Apple Podcasts, Overcast, Spotify, Pocket Casts, YouTube, or your podcast app of choice — or go straight to the RSS feed.

In this sponsored Spotlight episode of Ctrl-Alt-Speech, host Ben Whitelaw talks to Mike Pappas, the founder & CEO of our launch sponsor Modulate, which builds prosocial voice technology that combats online toxicity and elevates the health and safety of online communities. Their conversation takes an in-depth look at how voice is becoming an increasingly important medium for online speech while technology is making more advanced voice moderation possible, and what that means for trust and safety.

This episode is brought to you with financial support from the Future of Online Trust & Safety Fund, and our sponsor Modulate.

Posted on Techdirt - 23 July 2024 @ 12:08pm

Congress Wants To Let Private Companies Own The Law

It sounds absolutely batty that there is a strong, bipartisan push to lock up aspects of our law behind copyright. But it’s happening. Even worse, the push is on to include this effort to lock up the law in the “must pass” National Defense Authorization Act (NDAA). This is the bill that, every year, Congress lights up like a Christmas tree with the various bills it knows it can’t pass normally.

And this year, they’re pushing the Pro Codes Act, a dangerous bill to lock up the law that has bipartisan support. The House bill is being pushed by Darrell Issa (who was once, long ago, good on copyright law) and in the Senate by Chris Coons (who has always been terrible on copyright law). We wrote about the many problems of the Pro Codes Act back in April, but Issa has still submitted it as an amendment to the NDAA (it’s Amendment 1082, so you have a bunch of scrolling to do to get there).


We’ve discussed a lot of this before, but it’s pretty deep in the wonky weeds, so let’s do a quick refresher. There are lots of standards out there, often developed by industry groups. These standards can be on all sorts of subjects, such as building codes or consumer safety or indicators for hazardous materials. The list goes on and on and on. Indeed, the National Institute of Standards and Technology has a database of over 27,000 such standards that are “included by reference” into law.

This is where things get wonky. Since many of these standards are put together by private organizations (companies, standards bodies, whatever), some of them could qualify for copyright. But, then, lawmakers will often require certain products and services to meet those standards. That is, the laws will “reference” those standards (for example, how to have a building be built in a safe or non-polluting manner).

Many people, myself included, believe that the law must be public. How can the rule of law make any sense at all if the public cannot freely access and read the law? Thus, we believe that when a standard gets “incorporated by reference” into the law, it should become public domain, for the simple fact that the law itself must be public domain.

This issue has come up in court many times in the past few years, mostly in cases involving Carl Malamud and his Public.Resource.Org, which has spent years trying to share various laws to make sure that the citizenry is properly informed. And yet he has been sued multiple times by those who claim their standards are private and covered by copyright.

Four years ago, there was a big victory when the Supreme Court sided with Malamud in a similar (but not identical) case regarding how the state of Georgia published its laws. In that case, Georgia partnered with a private publisher, Lexis Nexis, to publish the “Official Code of Georgia Annotated” and while Lexis would craft the “annotations,” the state of Georgia still considered the OCGA as the only truly “official” version of the law. When Malamud tried to publish his own version of the OCGA to make it more accessible, he was sued. But the Supreme Court made it clear that copyright cannot apply to “government edicts” and notes:

The animating principle behind this rule is that no one can own the law. “Every citizen is presumed to know the law,” and “it needs no argument to show . . . that all should have free access” to its contents.

Still, that did not get at the specific issue of “incorporation by reference,” which is at the heart of some of Malamud’s other cases. Two years ago, there was another pretty big victory, with a court finding that his publishing of standards that are “incorporated by reference” is fair use.

But industry standards bodies hate this, because often a large part of their own revenue stream comes from selling access to the standards they create, including those referenced by laws.

So they lobbied Congress to push this Pro Codes Act, which explicitly says that technical standards incorporated by reference retain copyright. To try to stave off criticism (and to mischaracterize the bill publicly), the bill says that standards bodies retain the copyright only if they make the standard available on a free, publicly accessible online source.

A standard to which copyright protection subsists under section 102(a) at the time of its fixation shall retain such protection, notwithstanding that the standard is incorporated by reference, if the applicable standards development organization, within a reasonable period of time after obtaining actual or constructive notice that the standard has been incorporated by reference, makes all portions of the standard so incorporated publicly accessible online at no monetary cost.

They added this last part to head off criticism that the law is “locked up.” They say things like “see, under this law, the law has to be freely available online.”

But that’s missing the point. It still means that the law itself is only available from one source, in one format. And while it has to be “publicly accessible online at no monetary cost,” that does not mean that it has to be publicly accessible in an easy or useful manner. It does not mean that there won’t be limitations on access or usage.

It is locking up the law.

But, because the bill says that those standards must be released online free of cost, its supporters, like Issa, can falsely portray it as “enhancing public access” to the law.

That’s a lie.

If we recognize standards incorporated by reference as being public domain, that enhances access. It allows the law to be published and shared by anyone. It allows the law to be presented in different formats and systems and in ways that are actually useful to more people, rather than relying on the one single source (the one who often has incentives to make it limited and hard to access, buried behind questionable terms of service).

On top of that, the idea that this bill belongs in the NDAA is ludicrous. It flies in the face of the very fundamental concept that “no one can own the law,” as the Supreme Court itself recently said. And trying to shove it into a must-pass bill about funding the military is just ridiculously cynical, while demonstrating that its backers know it can’t pass through the regular process.

Instead, this is an attempt by Congress to say, yes, some companies do get to own the law, so long as they put up a limited, difficult to use website by which you can see parts of the law.

Library groups and civil society groups are pushing back on this (disclaimer: we signed onto this letter). Please add your voice and tell Congress not to lock up the law.

Posted on Techdirt - 23 July 2024 @ 09:25am

DSA Ruling: ExTwitter Must Pay Up For Shadowbanning; Trolls Rejoice

In a stunning display of technocratic incompetence, the EU’s Digital Services Act (DSA) has effectively outlawed the very tool that online platforms have relied on for years to combat trolls: shadowbanning. Recent court decisions suggest that the DSA’s (possibly?) well-intentioned but misguided Article 17 has created a troll’s paradise, leaving websites in an impossible position when trying to deal with bad actors. I will note that the DSA’s authors were warned of this in advance by multiple experts, but they either didn’t care or didn’t listen.

But before we get into the details, we need to take a step back and remind people that the general understanding of shadowbanning changed dramatically five or six years ago. This change mostly happened because a bunch of Trumpists got angry that they weren’t getting enough free promotion and decided that was a form of shadowbanning. It’s not.

The original concept of “shadowbanning” was as a method of dealing with trolls who were only in it to get reactions from other users in forums. People realized that banning them wouldn’t work, since they’d just make new accounts and come back. Convincing everyone else not to respond wouldn’t work, because that runs against human nature.

The concept of shadowbanning goes back to some of the earliest parts of the internet. It was a way to deal with those trolls by making the troll think their posts had gone through, while no other user could actually see them. Just the troll. So the troll thinks they’ve posted… it’s just that no one is responding. Thus, they don’t get their troll dopamine hit and hopefully give up. The reality, though, is that none of the other users ever saw the post at all.

However, the key bit here is that the “shadow” part of shadowbanning has to be about the user not knowing they were banned. Otherwise, it’s just a ban.
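In code terms, the classic version is nothing more than a visibility check at read time. Here’s a minimal sketch (my own illustration, not any particular platform’s implementation):

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Post:
    author: str
    text: str

@dataclass
class Forum:
    # Accounts whose posts are hidden from everyone except themselves.
    shadowbanned: Set[str] = field(default_factory=set)
    posts: List[Post] = field(default_factory=list)

    def submit(self, author: str, text: str) -> None:
        # The post is accepted normally, so the author sees no error at all.
        self.posts.append(Post(author, text))

    def timeline_for(self, viewer: str) -> List[Post]:
        # Everyone else's view silently omits posts from shadowbanned authors;
        # the authors themselves still see their own posts as if nothing happened.
        return [p for p in self.posts
                if p.author not in self.shadowbanned or p.author == viewer]

forum = Forum(shadowbanned={"troll42"})
forum.submit("troll42", "bait bait bait")
forum.submit("alice", "hello")
print([p.text for p in forum.timeline_for("troll42")])   # ['bait bait bait', 'hello']
print([p.text for p in forum.timeline_for("bob")])       # ['hello']
```

The troll’s own view looks completely normal; everyone else’s simply never includes their posts. The moment the platform has to send the banned user a detailed notice about the restriction, that asymmetry, and the whole point of the technique, disappears.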

In 2018, Trumpist folks started complaining that they weren’t being promoted highly enough in search results or other algorithms. They misinterpreted the routine “downranking” of nonsense in an algorithm as something evil and awful, and (because why understand what things actually are?) declared that to be “shadowbanning.”

The term is now so widely used to mean that kind of visibility filtering/algorithmic adjustment that it has become effectively meaningless.

Nonetheless, it’s beginning to look like the EU might not allow any kind of “shadowbanning.” A couple of months ago, we wrote about a Belgian court punishing Meta for allegedly “shadowbanning” a controversial extremist politician. In that case, the court found that Meta couldn’t “justify” the downranking of the politician and argued that the downranking was based on the politician’s political views, and profiling someone based on their political views apparently violates the GDPR.

However, TechCrunch recently reported on another, different case, this time in the Netherlands, in which a PhD student, Danny Mekić, took ExTwitter to court for having “visibility filtering” applied to his account without being told about it.

Now, again, some background here is important. Before taking over Twitter, Elon decried the evils of “shadowbanning” at the company. He insisted (incorrectly) that it went against free speech, democracy, and all things good and holy. Indeed, one of the big “Twitter Files” misleading reveals was that the company did what it called “visibility filtering” — which everyone in the Elon realm of alternative facts seemed to forget was something the company publicly announced and was covered in the media back in 2018.

Hilariously, at the same time Musk was pushing those very Twitter Files that (1) revealed that the company was using the thing it had publicly said it was going to use nearly five years earlier while (2) insisting this was a big, secret revelation of bad behavior… Elon was making use of those very tools to hide accounts he didn’t like, such as ElonJet.

Indeed, soon afterwards, Elon (without recognizing any of the irony at all) announced that this “visibility filtering” (what his friends called shadowbanning) would be a key part of moderation on Twitter.


So, the new Twitter policy was the old Twitter policy, which had been announced in 2018, and which Elon insisted was horrible and had to be “revealed” via a “Twitter Files” dump, and which he had to overpay to buy the company to stop… just to announce that it was now the official policy under his regime.

A few months later, the company announced that it would ramp up that shadowban… er… visibility filtering program, but it promised that it would be transparent about it and let you know:

Restricting the reach of Tweets, also known as visibility filtering, is one of our existing enforcement actions that allows us to move beyond the binary “leave up versus take down” approach to content moderation. However, like other social platforms, we have not historically been transparent when we’ve taken this action. Starting soon, we will add publicly visible labels to Tweets identified as potentially violating our policies letting you know we’ve limited their visibility.

And, indeed, every so often people get slapped with a “publicly visible label” that just seems to make them even angrier.

But Mekić believed his account had been visibility filtered without any notification. According to TechCrunch’s summary:

PhD student Danny Mekić took action after he discovered X had applied visibility restrictions to his account in October last year. The company applied restrictions after he had shared a news article about an area of law he was researching, related to the bloc’s proposal to scan citizens’ private messages for child sexual abuse material (CSAM). X did not notify it had shadowbanned his account — which is one of the issues the litigation focused on.

Mekić only noticed his account had been impacted with restrictions when third parties contacted him to say they could no longer see his replies or find his account in search suggestions.

For what it’s worth, the company claims it notified him multiple times.

It appears that he then went to the equivalent of a small claims court in the Netherlands to argue that not being told violated the DSA because the DSA’s Article 17 requires that a service provider give users “a statement of reasons” for “any restrictions of the visibility of specific items of information provided by the recipient of the service.”

For years, we’ve pointed out how ridiculous this is. It basically means that sites need to explain to trolls why they removed their content or made it harder to read.

But it could also mean that actual shadowbanning (taking action against a malicious actor without their knowledge) is effectively outlawed. The court ruling is in Dutch, but the court appears to have sided with Mekić, and basically said that if you shadowban someone, you need to tell them, in fairly great detail, what happened and why. And telling them comes with a bunch of requirements, all of which would undermine shadowbanning. Even though Mekić was apparently notified, the explanation wasn’t clear enough for this court.

Which means there is no such thing as shadowbanning anymore.

It’s not shadowbanning if it’s not in the “shadow.” If you have to tell the shadowbanned about the shadowban, it’s no longer shadowbanning. It’s just a “hey, troll, come yell at me” notice.

From a translation of the ruling:

Contrary to Twitter’s argument, the subdistrict court judge considers that the restrictions imposed by it on [the applicant] fall under a restriction of visibility as referred to in Article 17, paragraph 1a, DSA. It has remained undisputed that the 64 million users of X in Europe were able to see [the applicant]’s messages in a reduced manner, which clearly constitutes a restriction within the meaning of that provision. The fact that not all 64 million users searched for [the applicant]’s account during that period does not alter this. The same applies to the fact that, as Twitter has noted, there was no restriction of the visibility of specific information provided by [the applicant], but a restriction of his entire account, this interpretation is not followed. After all, the greater, the reduced visibility of the entire account, entails the lesser, the reduced visibility of the specific information.

In the ruling itself, the story seems even worse because ExTwitter did, in fact, give the guy an explanation. But the judge says it wasn’t specific enough.

According to Twitter, it provided three messages to [the applicant] about the measure, on 15 October 2023, 14 November 2023 and 12 January 2024, thereby meeting the requirements of Article 17 DSA. [The applicant] first contested that he received the message of 14 November 2023. Since Twitter has not provided any evidence that this message reached [the applicant] and has also not made any concrete offer of evidence, this message will be disregarded in the assessment. However, even if [the applicant] had received this message, it does not comply with Article 17 DSA, since this message is formulated far too generally and does not contain any specific information referred to in Article 17 DSA.

The other two messages do not comply with the provisions of Article 17 paragraph 3 DSA. The email message of 15 October 2023 does not contain any information as referred to in Article 17 DSA. [Applicant] cannot infer from this message that a measure has been taken and which measure has been taken (sub a), why a possible measure would have been taken and which facts and circumstances are involved (sub b). Nor is anything stated about the legal basis (see sub d). Finally, the information referred to in sub f is also missing. The mere reference to the Help Center in the email cannot be regarded as such a notification. This email therefore does not meet the requirements of Article 17 paragraph 3 DSA or paragraph 4. This information is not clear, easy to understand and in any case not such that [Applicant] can exercise any rights of recourse that may be due to him. The message from Twitter of 12 January 2024 is also not fully compliant. That message also does not contain any information as referred to under sub f. It does otherwise state that there was a temporary restriction, although the extent of that restriction is not stated. It also states that a few days later X lifted the temporary restriction on [applicant]’s account. Although a specific date is missing, [applicant] could at least infer from this that these restrictions no longer applied on 12 January 2024.

This is just a “small claims” dispute, so I’m guessing it has little to no precedential value. But combined with that other ruling in Belgium, and the text of Article 17 itself, this is going to create a freaking field day for trolls in the EU.

So… now that tool that has been used for decades, mainly to deal with trolls, is basically no longer possible. If you take an action in the EU against a troll, you have to tell them about it. This bit of the law was clearly written by people who have never, ever had to deal with trolls. Because trolls will absolutely love this feature. They can whine incessantly and threaten legal process if you don’t give them a clear statement (which they can argue with) regarding what you did to them and why.

I know that people (especially in the EU) complain that my coverage of the DSA is unfair. But when you get results like this, what else am I supposed to say about the DSA? Sections like Article 17 are designed to deal with a world where everyone is acting in good faith. And, in doing so, empowers the trolls and harms the ability of websites to deal with trolls.
