MetaFilter, SCOTUS, and Section 230 February 20, 2023 12:13 PM

So, there are a couple of big cases coming up before the Supreme Court in the next week or so that might lead to the overturning of Section 230 of the Communications Decency Act (part of the 1996 Telecommunications Act). I'm wondering if the powers that be at MetaFilter have been looking at this and what the implications might be for our little community if this does end up getting overturned. Members of the community might have further insights.
posted by hippybear to MetaFilter-Related at 12:13 PM (32 comments total) 7 users marked this as a favorite

We all get cake?
IANYL
posted by From Bklyn at 12:58 PM on February 20, 2023 [4 favorites]


I have been eyeing this case nervously, too, because it seems like it would be such a HUGE disruption to...well, the entire web, these days.

I would be very grateful for some informed perspective!
posted by wenestvedt at 1:04 PM on February 20, 2023 [3 favorites]


Could someone briefly explain what this is about?
posted by NotLost at 2:45 PM on February 20, 2023


So, my understanding (which was greatly developed by listening to today's episode of the Strict Scrutiny podcast, transcript in link) is that Section 230 of the Communications Decency Act offers online platforms two protections: 1) platforms are not considered "publishers" of content their users post and so are not liable for it, and 2) platforms that moderate user-posted content to remove unwanted material (at the time, child porn was the main concern) are protected for doing so.

The first case is Gonzalez v. Google, which has to do with Google owning YouTube, YouTube hosting ISIS videos showing the execution of people, and YouTube placing advertisements alongside those videos, which means YouTube profits from snuff videos posted by a terrorist organization.

The second case is Twitter v. Taamneh, which also has to do with terrorism-related content posted to a platform by a third party (a user).

The reason these cases are getting a lot of political traction is that the current MAGA/Q claim is that services like Twitter are "censoring" right-wing content, and that if Section 230 is eliminated they won't be able to do that anymore. It's a misreading of the law, but it has gained traction and given these cases the fuel they needed to reach SCOTUS.
posted by hippybear at 3:05 PM on February 20, 2023 [10 favorites]


Another perspective offered here in the New York Times opinion section, pointing out that a lot of the bigger tech companies have been using Section 230 to shield themselves from complicity when someone catches them using discriminatory practices in their ad sales. Stuff like buying an ad on Facebook promoting apartments for rent, but adding the request that Facebook only show it to white people. Or asking Facebook to hide job postings from people above a given age.
posted by EmpressCallipygos at 3:42 PM on February 20, 2023 [7 favorites]


Gonzalez v. Google, which has to do with Google owning YouTube, YouTube hosting ISIS videos showing the execution of people, and YouTube placing advertisements alongside those videos, which means YouTube profits from snuff videos posted by a terrorist organization.

This misses the biggest part. It's not just that they're hosting the videos, or that they're placing advertisements with them, or that they're profiting from those advertisements (and, by the way, that ISIS is profiting as a result of those advertisements too). More important, IMO, is that their recommendation algorithms proactively steer the kinds of people Google thinks would be interested in those videos toward those videos. So the argument goes beyond profit, to say that Google, with its recommendation algorithms, is actively aiding recruitment efforts for a terrorist organization. It's bringing in viewers who otherwise would never have watched those videos.

And the argument is that this kind of thing goes way beyond Section 230. Google says that Section 230 protects them, because they're not considered publishers of the ISIS video and cannot be held liable for that content. But the counter-argument to that is that this isn't really about the content of the video at all. It's about the proactive steps Google is taking in calling attention to videos and recommending them to people who might be susceptible to ISIS's messages. So in this sense, Section 230 is just a red herring and not relevant.

I'm sympathetic to this line of thinking. We don't need to touch Section 230 at all to find it troubling that tech companies are deploying recommendation algorithms with far-reaching consequences and think they can get off the hook for them entirely and wash their hands clean. At some level there needs to be accountability. Aiding and abetting terrorist organizations is one of the more obvious instances in which you can see the ramifications, but there are other instances you could extrapolate from this, like YouTube's recommendation algorithms directing impressionable young white boys into violent MRA ideologies, hate groups, and militant white supremacist movements. That's essentially the #1 recruitment method of these groups right now: exploiting the recommendation algos to get innocent but impressionable and susceptible eyes and ears on them.
posted by naju at 4:19 PM on February 20, 2023 [20 favorites]


Yeah, YouTube's recommendations algorithm is legitimately a sociopath, in the truest sense of the word (i.e. "pernicious to society"). I have been trying for months to get the goddam thing to understand that I will never be interested in Joe Rogan or Jordan Peterson and it just keeps finding new channels focusing on them to recommend anyway, and lately it's added in fundamentalist bullshit as well (how Genesis "disproves" the existence of dinosaurs or somesuch was one of the latest delights. I think maybe this was because I watched some Mahalia Jackson and Staple Singers clips a while back? That doesn't mean I disbelieve science. It's pretty music. JFC).

The whole thing reminds me of my parents trying repeatedly to convince me as a kid to try some goddam liver or cottage cheese or something else I had already said repeatedly that I don't like. The difference there is that I loved my parents--as complicated a relationship as that was--and I just want each and every person who ever led to the YouTube recommendation algorithm being what it is right now to go drown in a sewer.

Anyway. I feel like, yeah, Section 230 and monetizing people's clips and YouTube's recommendation algorithm are all different things, and all problematic in their own distinct ways. Section 230 at least allowed user-generated content, which can be a wonderful thing when your experience of it isn't curated by latent fascists (which, increasingly, all the major platforms seem to think is actually a neat idea worth testing).
posted by johnofjack at 4:59 PM on February 20, 2023 [12 favorites]


Techdirt has a decent explainer of what’s at stake, and it links to some more in-depth arguments.
posted by gemmy at 9:47 PM on February 20, 2023 [4 favorites]


johnofjack: Yeah, YouTube's recommendations algorithm is legitimately a sociopath, in the truest sense of the word (i.e. "pernicious to society"). I have been trying for months to get the goddam thing to understand that I will never be interested in Joe Rogan or Jordan Peterson and it just keeps finding new channels focusing on them to recommend anyway

All of that bullshit can be sidestepped quite deftly by using a program like FreeTube to watch YT vids.
posted by Too-Ticky at 2:41 AM on February 21, 2023 [3 favorites]


I was hoping someone would have a ready-made answer to hippybear's question, since concerns about 230 have been around for ages. At a time when the community is trying to chart a new course with the steering committee, it's a little scary that the law could suddenly shift the ground under our feet. What happens to moderation without 230 in place? Will there be more deletions--such as a nightmare scenario where posts are deleted for violating state morality laws? Or is that an unreasonable fear?
posted by mittens at 4:46 AM on February 21, 2023 [1 favorite]


Is there really any chance of a change in the law that puts the enormous profits of internet and telecom companies at risk?
posted by seanmpuckett at 5:10 AM on February 21, 2023 [3 favorites]


This really seems like a question that can't be answered a) in advance of the facts b) by anyone who isn't a specialist attorney.
posted by restless_nomad (retired) at 6:51 AM on February 21, 2023 [11 favorites]


YouTube's recommendations algorithm is legitimately a sociopath, in the truest sense of the word

I have no doubt this is true, but I use YouTube in place of TV (I watch a LOT of YouTube), and I only very rarely see any of this toxic stuff; maybe one conservative-type video (what I assume is a very mild version of the stuff mentioned here) gets suggested in a month, and I tell it to f*ck off. Maybe because I kicked off almost ALL political stuff, including current news? I follow just a couple of liberal political accounts and The Kyiv Independent, but otherwise it's all art and dog and donkey videos, with some mudlarking, film critiques, and vegan and music stuff.

I'm always surprised when people talk about how ubiquitous the toxic content is, and how it is impossible to avoid. Which totally sucks because I really enjoy Youtube. naju's comment is horrifying and of course YouTube should be held responsible and required to fix that.
posted by Glinn at 7:04 AM on February 21, 2023 [4 favorites]


Is there really any chance of a change in the law that puts the enormous profits of internet and telecom companies at risk?

It really depends on: which of their interests do they want to bootlick? We have a runaway court that can decide cases without even Roberts's vote, so I'm just constantly preparing for terror.
posted by corb at 7:36 AM on February 21, 2023 [8 favorites]


Approaching this from a perspective of maximum cynicism, which is warranted by the current Supreme Court majority, is there a way for these cases to be a vehicle that gives the far right exactly what they want:

1. The Algorithm remains intact. Fascists love The Algorithm; it’s a great recruitment tool.

2. It becomes illegal to use moderation to restrict far-right content. This will not protect other political viewpoints, because it will not be applied evenly. There will be a carve-out that allows continued removal of CSAM and other obviously illegal content, but conservative threats of violence will be protected.

3. This may apply only to tech giants who are deemed “common carriers”. MetaFilter will be fine; Twitter, Facebook, and TikTok will burn as they’re flooded with fascist content and eventually abandoned by advertisers and other users. Little change for Google/YouTube since it doesn’t moderate much anyway. All Amazon reviews are now basically copy-pasted from Stormfront.

4. Or maybe it applies to everyone and MetaFilter has to go dark or risk being sued into a crater by neo-Nazis. It’s hard to imagine capitalists would allow the internet to be destroyed by the Supreme Court, but weirder shit has happened in the modern era.
posted by qxntpqbbbqxl at 7:36 AM on February 21, 2023 [7 favorites]


what the implications might be for our little community

Basically, MetaFilter can't exist without protections from Section 230. I'm recalling when Eyebrows reminded everyone (including me) about why, in their legal view, MetaFilter can't be a 501(c)3 organization.
It is not possible to have MetaFilter exist as a place where people talk about politics or a place where people share links, and also be a 501(c)3.
Those same reasons are why the site needs Section 230. Without it, Jessamyn could be legally exposed for every link anyone has ever posted here.
posted by The Pluto Gangsta at 7:55 AM on February 21, 2023 [4 favorites]


Let's all just hope the deciding court flies a flag with a fringe.
posted by chavenet at 8:52 AM on February 21, 2023 [4 favorites]


This really seems like a question that can't be answered a) in advance of the facts b) by anyone who isn't a specialist attorney.

Agreed. Personally I think speculating is likely to just get everyone wound up.
posted by terrapin at 10:14 AM on February 21, 2023 [3 favorites]


Without it, Jessamyn could be legally exposed for every link anyone has ever posted here.

Yep. I have been keeping an eye on what's up but generally speaking I am not super wound up about it. We have access to good counsel if we need it. Significantly bigger fish than us have more to worry about. I'm not being cavalier, just saying we're not going to borrow worry about it.
posted by jessamyn (staff) at 10:18 AM on February 21, 2023 [23 favorites]


Without it, Jessamyn could be legally exposed for every link anyone has ever posted here.

This is the only comment I'm posting in this thread, but as a law-talking person I feel compelled to state that this is absolute nonsense. Or at the very least rests on so many unstated assumptions as to be indistinguishable from absolute nonsense.
posted by Not A Thing at 10:44 AM on February 21, 2023 [8 favorites]


Lawfare Podcast has a pretty great discussion about the ins and outs of this whole thing.
posted by From Bklyn at 12:43 PM on February 21, 2023 [2 favorites]


This is a pretty clear write-up of the issues presented in the Supreme Court cases. The Supreme Court can do anything it wants, including going beyond the questions presented, but there is reason to think that the issues to be decided will not be the types of issues that affect MetaFilter. The central issues in the cases relate to using algorithms to suggest content (not an issue here) and anti-terrorism liability for broadcasting terrorists' information (probably a remote issue here). That's not to say that this stuff isn't important -- and there could be more cases to come -- but it seems unlikely that the Supreme Court's decision will immediately expose MeFi (much less its owner -- hello, corporate separateness) to liability.
posted by Mid at 12:49 PM on February 21, 2023 [6 favorites]


That's my take too, Mid. In any case, while I think it's all pretty interesting, I agree with the general tenor that it's not worth MeFi getting worked up about this at the moment.
posted by naju at 1:39 PM on February 21, 2023


Legal Eagle has a video about the issue. The main thing I took from it was his opinion that if the court finds that recommendation algorithms remove 230 protections, the same logic may extend to search functionality. It'll be interesting to see.
posted by vibratory manner of working at 2:54 PM on February 21, 2023 [1 favorite]


I'm always surprised when people talk about how ubiquitous the toxic content is, and how it is impossible to avoid.

I used to get a fair number of objectionable YouTube recs, but it got better, at the expense of getting blander - it feels like they toned down the feedback loop a bit but try to push more stuff that’s broadly popular regardless of relevance. And sometimes it will arbitrarily decide, having pegged me as into “guy stuff” I suppose, that I ought to be interested in channels with names like “Matt’s Awesomesauce Beer and Cigar Reviews.”

Note that this is as someone who studiously does not like or subscribe or necessarily even log in. And I’m not holding the occasional Joe Rogan video against it, because I’ve definitely intentionally watched clips of comedians who have been on his show. I mean, he’s just so big it’s pretty hard to disentangle him from the rest of media.
posted by atoxyl at 9:14 PM on February 21, 2023 [1 favorite]


Actually the one that bugged me the most in recent memory was when it wouldn’t stop showing that John Campbell guy who somehow became a COVID “expert.” But it’s still not like the “destroys feminist” days.
posted by atoxyl at 9:17 PM on February 21, 2023


try to push more stuff that’s broadly popular

or maybe less that and more “certified content partners”
posted by atoxyl at 9:26 PM on February 21, 2023 [1 favorite]


Now that you mention it, I did get Joe Rogan videos for much longer than I liked, but that too eventually stopped. I should probably mention that I pay for YouTube monthly; I may be paying my way to less toxic content, which makes sense, and how silly of me not to realize that sooner. I do get some of that broader-appeal blander stuff, which is annoying but not enraging, at least.
posted by Glinn at 10:17 PM on February 21, 2023 [2 favorites]


I'm always surprised when people talk about how ubiquitous the toxic content is, and how it is impossible to avoid.

It depends on which areas of YouTube you hang around in.

I started a YouTube channel because I was going to make videos about electronics. So while logged into that channel's account, I subscribed to a bunch of popular channels about electronics and Arduino and so on, and watched a bunch of their videos. Nothing remotely political, just DIY electronics videos.

By Day 2 I was getting non-stop recommendations on that channel for pro-Johnny-Depp stuff (his court thing was happening at the time) and Jordan Peterson and Joe Rogan and Prager U and even worse things. I'd click "Not Interested" on them but they kept coming. Commercials for them would air at the beginning of the electronics videos I watched.

This pretty much convinced me not to bother starting that channel. I might still do it some day and I'd love to "be the change" and so on but it just seems disheartening.

To be fair, that was about 2 years ago. I just checked that account now and it's a mix of electronics videos and mainstream stuff with only a couple of red flags. So maybe things have improved, but I'll readily believe this can still happen to people who get filed into a particular category.
posted by mmoncur at 2:15 AM on February 22, 2023 [6 favorites]


So maybe things have improved, but I'll readily believe this can still happen to people who get filed into a particular category.

Just spitballing but imagine:

On new-account creation, start with the recommendations that have the best stats for generating engagement/income, then follow with self-reinforcing behaviour (you didn't engage with them? don't bother again; maybe try the expressed interests instead?). Something like the toy sketch below.
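
A minimal Python sketch of that guess, with every name, weight, and number made up (this is a toy, not YouTube's actual system):

# Toy cold-start recommender, following the spitball above.
# All names and numbers are hypothetical.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Channel:
    name: str
    engagement_score: float  # made-up platform-wide stat: engagement/income per impression

@dataclass
class User:
    expressed_interests: set = field(default_factory=set)
    weights: dict = field(default_factory=dict)  # per-channel multiplier, defaults to 1.0

def recommend(user: User, catalog: list, k: int = 3) -> list:
    # Day 1: every new account starts from the same engagement-ranked list;
    # personal weights and expressed interests only nudge the ordering.
    def score(ch: Channel) -> float:
        personal = user.weights.get(ch.name, 1.0)
        interest = 2.0 if ch.name in user.expressed_interests else 1.0
        return ch.engagement_score * personal * interest
    return sorted(catalog, key=score, reverse=True)[:k]

def record_feedback(user: User, ch: Channel, engaged: bool) -> None:
    # The self-reinforcing part: engaging boosts a channel, ignoring it decays it.
    current = user.weights.get(ch.name, 1.0)
    user.weights[ch.name] = current * (1.3 if engaged else 0.8)

After a few rounds of ignored recommendations, the high-engagement bait decays below the channels matching the user's stated interests, which would be the "maybe try the expressed interests?" step.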
posted by pompomtom at 3:13 AM on February 22, 2023


I'm surprised there's not a post on the Blue about this unless I missed it.
posted by pelvicsorcery at 9:55 AM on February 24, 2023


Moderators are hired to help a website (such as ours) avoid certain snake pits, and encourage civility. We don't have to deal with algorithms that steer unwanted messages to our members. So, "They" are not coming for us. "They" are coming for "Them." As it stands for us, on a practical level, we take out our own trash.

I've seen unmoderated BBs and discussion sites crash and burn because, well, people be people. Even here we struggle with rules from time to time. I'm at a loss to envision a theory that draws a bright line between freedom and safety. The First Amendment is a tool so powerful that it can be used as a weapon. The "shouting 'fire' in a crowded theater" analogy fails to cover the ground discussed in this thread.

In a broader view, our country's divisiveness is already drifting dangerously into the "Good King" version of governance. We liberals can't wait for the trifecta of Democrats to be elected so we can get our ducks lined up, hoping we can retain power until the damned Republicans in SCOTUS finally die off or retire so we can appoint some Democrats who'll do the right thing. Our fear is that our enemy, the Republicans, will gain that much desired hegemony and overwhelm us with their ignorant, possibly evil, doctrines.

I can't wrap my mind around the concept of carefully guided freedom of expression. I am aware of the desire--and need--to protect the vulnerable, but the ways and means to do so require something lying just beyond my ability to comprehend. Parental controls on our children's computers make sense, but that only goes halfway if you are talking about protecting people other than yourself. China and Russia have solutions for keeping their citizens safe from unwanted internet traffic. How do we do that in America?
posted by mule98J at 5:03 PM on February 28, 2023


