Do replies need a character limit? February 25, 2025 5:46 AM Subscribe
Just because your reply has a lot of characters, that doesn't mean you have character, am I right. Okay.
I've noticed a trend towards filibustering commentary here. I myself am prone to the occasional long reply. And look, sometimes we have a lot to say. But when these replies hit -- the "buckle up, here comes some game theory" replies -- I suspect most of us just check out. I know I do.
We all have different ideas of what a long reply is. And I know that one could be necessary; sometimes they're even great! But I also think all of us agree that a reply CAN BE too long. It's probably not necessary to cite examples. We all know.
My questions are: should we have a character limit for replies, and if so, how many characters? I'm thinking quite a few, but probably not like. Three million.
I rescind this post if, God help me, there IS a character limit. If there is...I mean, there isn't, right? Because...no. There couldn't be. ...Right?
Now that I've gotten the character-limited version out of the way: even you say that sometimes a long comment is valuable. How would you set the limit to a number that allows for those valuable comments, while still discouraging the ones you dislike?
Choosing whether to read a comment, or whether to scroll past it (j/k and maybe ./, scroll on comment pages, as a tip) is still possible, especially if your only issue is that the commenter bores you.
posted by sagc at 5:54 AM on February 25 [3 favorites]
Fuck no. A character limit? If something is too long for you to want to read, don't read it.
posted by fennario at 5:55 AM on February 25 [19 favorites]
Also, someone would have to check, but I don't think anyone has ever posted a comment they'd written themselves that's longer than the Treaty of Westphalia - so people probably aren't running up against any technical limits at the moment.
posted by sagc at 5:56 AM on February 25 [4 favorites]
Is there an example you can point to where a character limit would have helped?
posted by Vatnesine at 5:56 AM on February 25 [3 favorites]
Huh?
posted by tiny frying pan at 6:04 AM on February 25 [1 favorite]
Short answer, no. Long answer, noooooooooooooooooo.
posted by lucidium at 6:21 AM on February 25 [15 favorites]
Character limits are pointless. The only thing they are going to achieve is that people are going to break their posts up into multiple posts. If people have something to say, they are going to say it.
posted by Barry Boterman at 6:26 AM on February 25 [14 favorites]
Agree with others, this is a nonstarter of an idea for me. Character limits are a technical solution to the social problem of community standards and moderation, i.e. irrelevant for Metafilter as currently constituted.
posted by dbx at 6:27 AM on February 25 [2 favorites]
Absolutely not.
posted by bowbeacon at 6:32 AM on February 25 [1 favorite]
No. Filibusters are an important tell.
posted by A forgotten .plan file at 6:41 AM on February 25 [3 favorites]
No.
posted by lawrencium at 6:42 AM on February 25 [1 favorite]
Okey dokey!
posted by kittens for breakfast at 6:47 AM on February 25 [4 favorites]
I don't want to link to anything, because it's literally not a matter of "not liking" a long comment, it's a matter of not reading one. I will plow through some long ass comments but sometimes it's like, "Jesus Christ."
posted by kittens for breakfast at 6:49 AM on February 25 [4 favorites]
I like the ability to have long comments here. Even if I don't always read them, I think it contributes to the discourse.
posted by warriorqueen at 6:58 AM on February 25 [6 favorites]
Rough morning?
posted by wanderlost at 7:06 AM on February 25 [2 favorites]
TL;DR
posted by grumpybear69 at 7:10 AM on February 25 [3 favorites]
I don't really think so, no.
Although, between you and me, I sometimes think there should be a character limit on relationship-focused AskMe posts. I'm kidding! I'm kidding. Sort of.
posted by kbanas at 7:20 AM on February 25 [5 favorites]
What I've noticed is that replies that *I* think are too long are usually favorited by others, meaning someone got something out of those long replies, even if I didn't care for the format. Different folks, different strokes, community, and all that.
If there was a limit, then we'd have to decide what the limit is and that seems unnecessary at this point, as there hasn't been a technical problem with reply length. As someone else mentioned, a reply can be at least as long as the Treaty of Westphalia, and that's plenty!
For the curious, here's stats on the Treaty of Westphalia (via):
characters: 87941
words: 14808
sentences: 385
paragraphs: 261
spaces: 14547
posted by Brandon Blatcher (staff) at 7:21 AM on February 25 [4 favorites]
Realistically there is already a character limit - strings can only be so long in the backing database. Depending on the software, this limit could be 2 GB, but I suspect that the server software would prevent comments much shorter than that from being POSTed in the first place.
More seriously, this is an interface problem. Long comments take up y-axis page real estate in a way that can be obnoxious if you just want to scroll past them. At least one other site allows comments of a certain length and then hides the rest behind a [read more] button that expands the comment to full size.
Of course this has accessibility issues. But so do long comments in screen readers.
Personally I haven't noticed a problem with needlessly long comments here. I hate to say it, but this is probably more of a mod issue - a comment long enough to be truly disruptive should be removed.
posted by AndrewStephens at 7:26 AM on February 25 [1 favorite]
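As a rough illustration of the "[read more]" idea above, here is a minimal client-side sketch, not anything MetaFilter actually runs. The ".comment-body" selector and the 1,500-character threshold are assumptions; a native details/summary element is used so the hidden text stays reachable by keyboard and screen readers, which softens (but does not remove) the accessibility concern.

const PREVIEW_CHARS = 1500; // hypothetical threshold, not a real site setting

function collapseLongComments(): void {
  // ".comment-body" is an assumed selector for comment text containers.
  document.querySelectorAll<HTMLElement>(".comment-body").forEach((body) => {
    const length = (body.textContent ?? "").length;
    if (length <= PREVIEW_CHARS) return;
    // Wrap the full comment in a native <details> element: nothing is deleted,
    // it is just collapsed until the reader opts in.
    const details = document.createElement("details");
    const summary = document.createElement("summary");
    summary.textContent = "[read more] (" + length.toLocaleString() + " characters)";
    body.parentElement?.insertBefore(details, body);
    details.append(summary, body);
  });
}

collapseLongComments();

A CSS line-clamp on a preview block would achieve a similar effect without moving nodes around; either way it is a presentation choice, not a limit on what can be posted.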
There's no, there's hell no, and there's aw hell naw.
posted by Lemkin at 7:29 AM on February 25 [2 favorites]
Character limits seem very...1998 internet, for lack of a better way to put it. Seems like a solution in search of a problem that was solved long ago (database storage size). There's also a readability problem, of course, but the eyes can skip what the database cannot.
posted by pdb at 7:29 AM on February 25 [1 favorite]
we might run out of creamed corn, but you will pry our verbosity from our cold, dead hands
posted by ginger.beef at 7:41 AM on February 25 [10 favorites]
Please no
posted by obfuscation at 8:29 AM on February 25
You are advocating a technical solution to a social problem. Your idea will not work. Here is why it won't work:
(X) Not everyone agrees with your definition of bad behavior
( ) You're denying yourself the ability to respond to ___________ which is worse than the original problem because: ______________
( ) Assholes are very good at standing right on the line and insisting they did nothing wrong
( ) ...
posted by zamboni at 8:43 AM on February 25 [13 favorites]
Yeah, simplified and short is ALWAYS BEST.
Click here to start.
posted by lalochezia at 8:52 AM on February 25 [4 favorites]
No.
Character limits would have stopped flabdablet from posting these amazing responses describing how pressure relief valves and expansion tanks work in residential plumbing systems.
posted by jpeacock at 8:59 AM on February 25 [6 favorites]
do you hear what I hear
what I hear kittens for breakfast saying is, Please respond to all my posts with excessively long sentences and convoluted, meandering digressions
posted by ginger.beef at 9:06 AM on February 25 [5 favorites]
I'm +1 for this just to mess up the consensus.
posted by Diskeater at 9:21 AM on February 25 [1 favorite]
After a thorough review of my comment history, I'm comfortable with comments having a limit of 2531 characters. You may proceed.
posted by mittens at 9:34 AM on February 25 [3 favorites]
Is there a way to add paragraph breaks after maybe 10 sentences or so? Because I find it really hard to read a wall of text without line breaks, and I'm sure I'm not the only one.
posted by Orkney Vole at 9:55 AM on February 25 [2 favorites]
The loss of Loquacious' long rambling Tales of the Pacific Northwest would be a terrible thing, IMO.
Not everyone needs to read the long comments but they should stay for those who want to.
posted by supermedusa at 9:58 AM on February 25 [13 favorites]
Full disclosure, what tipped me over to the "create your first MetaTalk" point was the fact that I specifically keep seeing these verbiage dumps in threads about AI. I strongly suspect these are people just flooding the area with shit to kill the conversation. And by "shit" I mean chatbot babble.
posted by kittens for breakfast at 11:13 AM on February 25 [6 favorites]
I think this is a great example where a rule would be bad, but if the context is that chatbot input is being dumped in a thread, it warrants a look.
posted by warriorqueen at 11:19 AM on February 25 [10 favorites]
The idea of a character limit is so ridiculous, and would be so destructive to Metafilter that it's hard for me to believe the suggestion was made in good faith.
Are you trying to kill Metafilter, kittens for breakfast? Do you really hate us that much?
posted by jamjam at 11:21 AM on February 25
As it happens, jamjam, the truth is tha
posted by kittens for breakfast at 11:29 AM on February 25 [19 favorites]
> I specifically keep seeing these verbiage dumps in threads about AI. I strongly suspect these are people just flooding the area with shit to kill the conversation
OH. well if you'd just lead the thread with that it woulda been a complete flipflop in the other direction. don't nobody wanna hear that mess.
posted by glonous keming at 11:45 AM on February 25 [5 favorites]
And by "shit" I mean chatbot babble.
What on earth? We just had a thread about not including ChatGPT text in comments. Or are you saying...people talking about AI in threads about AI is...bad?
posted by mittens at 12:09 PM on February 25
(For people who joined us in the last few years, here's why people are referring to the Treaty of Westphalia.)
posted by brainwane at 12:10 PM on February 25 [4 favorites]
It's probably not necessary to cite examples. We all know.
I would definitely be more receptive to your argument if you would link to specific comments that you find unreadably long.
I don't want to link to anything, because it's literally not a matter of "not liking" a long comment, it's a matter of not reading one.
I'm having trouble understanding your reasoning; are you saying that, if you disliked the content of a comment, you would feel more inclined to link to an example, but since it's solely its length you object to, you don't want to link?
Full disclosure, what tipped me over to the "create your first MetaTalk" point was the fact that I specifically keep seeing these verbiage dumps in threads about AI. I strongly suspect these are people just flooding the area with shit to kill the conversation. And by "shit" I mean chatbot babble.
Have you tried flagging those comments for moderator attention and indicating that you suspect that the contents are generated by LLMs/chatbots? If so, what happens after you do that?
Also, about how many different MeFites do you suspect of acting in this manner? If it's under, say, 5 people, then I'm much less inclined to support a sitewide rule shrinking the allowable comment length. We could instead ask the mods to have a word with those folks.
posted by brainwane at 12:18 PM on February 25 [2 favorites]
I miss Meatbomb.
posted by Mr. Yuck at 12:36 PM on February 25 [9 favorites]
As it happens, jamjam, the truth is tha
I can't tell whether you're modeling the behavior you desire or don't think I can handle the truth — and if it's the latter, you might be right!
posted by jamjam at 12:41 PM on February 25 [1 favorite]
I rescind this post if, God help me, there IS a character limit. If there is...I mean, there isn't, right? Because...no. There couldn't be. ...Right?
Our comments are stored in a database, so there definitely is a limit of some kind. I guess if the Treaty of Westphalia can be posted, it's more than 65535 characters (2^16 - 1, a common limit). Only one way to find out.
Our comments are stored in a database, so there definitely is a limit of some kind. I guess if the Treaty of Westphalia can be posted, it's more than 65535 characters (2^16 - 1, a common limit). Only one way to find out.
Our comments are stored in
posted by ssg at 12:54 PM on February 25 [5 favorites]
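For anyone wondering where 65535 comes from: it is 2^16 - 1, the ceiling of a classic 16-bit text column (MySQL's TEXT type, for instance, tops out at 65,535 bytes). Below is a quick arithmetic check against the character count Brandon Blatcher posted above, offered as a sketch rather than anything known about MetaFilter's actual schema.

const commonTextLimit = 2 ** 16 - 1; // 65,535, e.g. the maximum size of a MySQL TEXT column
const treatyChars = 87_941;          // Treaty of Westphalia character count quoted earlier in the thread

console.log(commonTextLimit);               // 65535
console.log(treatyChars > commonTextLimit); // true: if the treaty fit in one comment, the column is bigger than that

So if the treaty really was posted as a single comment, the column is presumably something roomier, such as MySQL's MEDIUMTEXT (16,777,215 bytes), or the effective limit is enforced somewhere other than the database.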
posted by Wordshore at 12:54 PM on February 25 [4 favorites]
No.
posted by dg at 1:09 PM on February 25 [1 favorite]
Of course we all know the Treaty of Westphalia comment was carved out by Matt.
But to anyone using the Treaty of Westphalia as a benchmark for how long a comment should be, I suggest a career in the diplomatic corps.
posted by clavdivs at 2:04 PM on February 25 [2 favorites]
As long as we are unburdened by what has been, no limits.
posted by JohnnyGunn at 3:01 PM on February 25
No.
posted by hototogisu at 3:21 PM on February 25
I realize that a lot of people seem to think the point of MetaTalk is to argue about stuff for days in the most heated fashion possible, but I really just thought, "Huh, this amount of verbiage seems anti-conversational to me." Again, I have seen this happen in every AI thread, and in that context, I have to wonder whether it's actively weaponized text, not even the product of a good faith participant, just grey goo that is intended to kill a conversation. But I also just sometimes see an amount of text that goes on for thousands and thousands of words, which again seems like the opposite of a conversation to me. OTOH, some of those comments are great. On the weird mutant third hand, I'm not sure all of them are that great.
So, whatever. If people want to see replies that go on interminably, live your life, dude, sing your song of hyperglossalia. Have f
posted by kittens for breakfast at 3:25 PM on February 25 [5 favorites]
I think you should probably post a third comment in your own thread accusing people of bad faith commenting. That will really sell it.
posted by hototogisu at 3:30 PM on February 25 [4 favorites]
In all seriousness, what in the world is your problem? This is exactly what I meant in the other thread when I said I wouldn't be a moderator here for any money. The title of the post isn't a statement, it's a question. I am not sure why a question is read as an invitation to argument. I'm sorry this site has taken on such a sour temperament.
posted by kittens for breakfast at 3:48 PM on February 25 [4 favorites]
You keep making references to people flooding the zone with shit to kill conversation, though you won’t cite examples, and you’re asking me what my problem is?
posted by hototogisu at 4:07 PM on February 25 [3 favorites]
Nobody argued. They answered. And then you got all mad because people answered “no”. None of this was heated.
posted by bowbeacon at 4:08 PM on February 25 [2 favorites]
And nobody is mad at the mods here. Modding this thread is very easy, because it is a calm discussion.
posted by bowbeacon at 4:09 PM on February 25
Sure. Well, bye.
posted by kittens for breakfast at 4:17 PM on February 25
For what it's worth, there are definitely answers on Ask that I find obnoxiously long and I have, on occasion, flagged them as a derail. Certain people seem to be very pleased with the sound of their own text, and can veer far off the course of a helpful answer. So I can see where kittens for breakfast is coming from. I like the idea of collapsing very long comments behind a "read more" break but this thread has made me realize it'll never happen. ¯\_(ツ)_/¯
posted by Jemstar at 4:19 PM on February 25 [4 favorites]
For the curious, here's stats on the Treaty of Westphalia
Per this Readability Checker, the Treaty of Westphalia is Very Difficult to Read.
Flesch-Kincaid formula: 31.20
Gunning Fog Index: 22.85
SMOG: 17.76
This one says "189 of 357 sentences are very hard to read."
posted by kirkaracha (staff) at 4:32 PM on February 25 [2 favorites]
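For context, those scores come from standard readability formulas rather than anything site-specific. The sketch below shows the Flesch Reading Ease formula (the "31.20" above reads like a Reading Ease score, where lower means harder to read); the treaty's syllable count isn't in the thread, so the roughly 1.6 syllables per word used here is a hypothetical figure chosen to match the reported score.

// Flesch Reading Ease = 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
// Scores of 30-50 are conventionally labelled "difficult"; below 30 is "very difficult".
function fleschReadingEase(words: number, sentences: number, syllables: number): number {
  return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words);
}

// Brandon Blatcher's counts: 14,808 words and 385 sentences; ~1.6 syllables/word is an assumption.
console.log(fleschReadingEase(14_808, 385, Math.round(14_808 * 1.6))); // about 32, close to the 31.20 reported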
Mod note: One comment removed at poster's request.
posted by Brandon Blatcher (staff) at 5:51 PM on February 25
XCI.
As to Confiscations of Things, which consist in Weight, Number and Measure, Exactions, Concussions and Extortions made during the War; the reclaiming of them is fully annull'd and taken away on the one side and the other, in order to avoid Processes and litigious Strifes.
History repleats itself.
posted by clavdivs at 7:10 PM on February 25
The character limit needs to be at least one bigger than this comment about wilderness survival.
posted by surlyben at 8:33 PM on February 25 [8 favorites]
Nobody wants to see the 🧵 icon
posted by Fiasco da Gama at 8:58 PM on February 25 [2 favorites]
Character limits are pointless. The only thing they are going to achieve is that people are going to break their posts up into multiple posts. If people have something to say, they are going to say it.
This. And some of our longer-winded commenters already tend to do this anyway.
posted by hydropsyche at 4:04 AM on February 26 [2 favorites]
No one is holding a gun to your head and forcing you to read every comment on every post
I routinely scroll past long comments
If I see a ton of favorites on them, I might scroll back up and skim a bit until I find the good stuff
I routinely scroll past a lot of short comments, too, looking for comments with high favorites and/or familiar names
It's so weird to propose prohibiting long comments when you can simply... not read them
posted by Jacqueline at 5:23 AM on February 26 [2 favorites]
It's so weird to propose prohibiting long comments when you can simply... not read them
Agreed, and if the follow-up reasoning is "chatgpt comments" or "pure bad-faith comments", then we already have rules against both of those things (right?), so why would a character limit be a rational imposition on otherwise rule-abiding comments?
posted by fennario at 6:27 AM on February 26 [2 favorites]
No.
I contemplated my first ever use of Ai by prompting ai with "answer random a question with no using 25,000 words"
posted by chasles at 7:16 AM on February 26
I contemplated my first ever use of Ai by prompting ai with "answer random a question with no using 25,000 words"
it's what kittens for breakfast would have wanted
posted by ginger.beef at 7:34 AM on February 26
Sometimes you just gotta accept that people will do things that are annoying as shit, and you don't need to implement a policy or code change about it.
Source: me @ myself in the three Discord servers I mod trying to determine whether something is actually, genuinely disruptive, or if it just pisses me off personally.
posted by brook horse at 7:43 AM on February 26 [8 favorites]
Sometimes you just gotta accept that people will do things that are annoying as shit, and you don't need to implement a policy or code change about it.
Source: me @ myself in the three Discord servers I mod trying to determine whether something is actually, genuinely disruptive, or if it just pisses me off personally.
Agreed.
Source: me @ myself 1000x daily as the parent of a middle school boy with ADHD.
So many conversations with myself about "Do I need to correct this behavior for big-picture reasons, or just tolerate my own irritation?"
posted by fennario at 8:16 AM on February 26 [9 favorites]
LOL I was just thinking that I learned this from teaching Child Development to undergrads for a year, and how much better my life got when I started asking myself, "Is this actually a problem or are they just 19 years old and annoying, which is their job at that age?" Once I learned to be chill about that I sort of just extrapolated the "is it a problem or just annoying" to everyone else with varying degrees of success.
posted by brook horse at 8:22 AM on February 26 [18 favorites]
I will plow through some long ass comments but sometimes it's like, "Jesus Christ."
That sounds like a you problem.
I know that sounds glib, but - honestly, some things are actual problems and some things are just things that annoy us personally, you know? I've personally never made a MeTa about people who use the words "veggies" or "birb" or "kiddos" even though those words make me itch. I also wouldn't DREAM of making a MeTa about them because - everyone's different and there are some things that just aren't overall problems but are matters of personal preference. And for the things that are matters of personal preference....I just suck it up, buttercup. If it means I don't read something, oh well. If I miss out as a result, oh well.
This isn't a site problem.
posted by EmpressCallipygos at 8:28 AM on February 26 [4 favorites]
LOL I was just thinking that I learned this from teaching Child Development to undergrads for a year, and how much better my life got when I started asking myself, "Is this actually a problem or are they just 19 years old and annoying, which is their job at that age?" Once I learned to be chill about that I sort of just extrapolated the "is it a problem or just annoying" to everyone else with varying degrees of success.
YES, I am a recovering control freak with a lot of sensory sensitivity and I was worried how that would impact my parenting. I had to be really intentional about learning the skill to look inward and have that conversation with myself. Now it is advice I give to other parents (if asked), to be thoughtful about when you correct and when you just tolerate kids being kids. What I've observed and experienced is that if you over-correct things and make a rule about every little thing you find irritating, then they are more likely to tune you out. Time and place of course, and consideration of others' needs: the 'just annoying' stuff that's okay at home is not okay like on a train or in a restaurant or something. I'm derailing but this is a super interesting topic to me and if I don't stop myself now I could write a book-length comment.
posted by fennario at 8:40 AM on February 26 [8 favorites]
Unlike some others, I don't think the suggestion is unreasonable or that hugely long answers are purely a you problem -- especially if you read on mobile, really long, low content answers can be seriously discouraging to participating in the thread at all.
I don't think a technical solution is the answer, mainly because I don't think it would help, but also because it would impact high value long responses just as much as low value long responses.
If there is a pattern of certain people shitting up AI threads on purpose to stonewall discussion then that should be addressed in the same way that thread-shitting can be addressed when it is just a one-liner -- by asking the specific individuals involved to step away from the thread or threads like it if they can't participate in good faith.
posted by jacquilynne at 8:42 AM on February 26 [8 favorites]
BIRBS
:D
my default emotional state is annoyance, so I have to work at keeping it chill as I sail through life. people can be annoying and we all have different thresholds and triggers. walking away can be a deep power move. (I mean, from a thread/comment, not like, buttoning)
posted by supermedusa at 9:08 AM on February 26 [2 favorites]
MeFi: recovering control freak
really long, low content answers can be seriously discouraging to participating in the thread at all
Is this happening with sufficient frequency that we need to collectively search our souls about it?
Honestly it's the propensity for short drive-by comments that seems to warrant discussion but even there I'd chalk up a MeTa of that nature as "Someone is having a Bad Day and needs to vent". Which is fine.
posted by ginger.beef at 9:11 AM on February 26
I personally enjoy most of the very long comments that pop up in the AI threads, and generally find them to be informative and made in good faith.
posted by whir at 9:47 AM on February 26
I have composed a truly marvelous comment, which this box is too short to contain.
posted by GenjiandProust at 12:20 PM on February 26 [7 favorites]
Would you consider posting it in a series of seven volumes over the next couple decades?
posted by ssg at 12:56 PM on February 26 [1 favorite]
The sea was angry that day, my friends - like an old man returning soup at a deli.
posted by kbanas at 1:10 PM on February 26 [7 favorites]
I think I know some of the comments you're talking about.
On one hand, I do think that sometimes excessive length can be a conversation killer; people don't want to read and respond to all of that, but at the same time the non-threaded nature of MetaFilter means your replies are in the context of all the replies that came before. It can feel weird to just ignore part of the conversation on tl;dr grounds (at least if you're me). There might be an important point in all that.
On the other hand, I think it's lack of editing rather than bad faith, you know? Sometimes conversations won't be perfect and that's fine.
Or here's a way I'd put it, kittens:
If someone is posting in bad faith, then the problem is that they're posting in bad faith, not the length of their comment. If you can show that they're posting in bad faith, you can moderate it on those grounds; you don't need a new policy. If you can't show that they're posting in bad faith, however, that's not a good reason to implement a new policy.
posted by Kutsuwamushi at 2:00 PM on February 26 [3 favorites]
Take another look at kfb's actual complaint:
Just because your reply has a lot of characters, that doesn't mean you have character, am I right. Okay.
I've noticed a trend towards filibustering commentary here. I myself am prone to the occasional long reply. And look, sometimes we have a lot to say. But when these replies ...
They're not talking about comments in general, they're upset about replies.
More than likely because they got into a back and forth with another user or users who resorted to very long responses and more or less prevailed by dint of sheer mass of verbiage regardless of the actual merits of their arguments.
This Meta is the latest of a long series where the poster got irritated by something another user did and came here to salve their feelings by proposing a general rule against doing that kind of thing.
And it got the usual response.
But what has also been usual in my experience is that the target, named or unnamed (as here), actually does stop doing it even though the complaint is roundly rejected, and so do other people who think they might have been doing it.
So we end up losing some very desirable content we otherwise would have had, and that's why I replied to the complaint with the asperity I did.
posted by jamjam at 2:04 PM on February 26 [5 favorites]
Does the site need a way for me to collapse really long comments mid-comment, the ones I would prefer not to work hard at scrolling past accurately on mobile? Absolutely, without a doubt, yes.
Does that need to be implemented as a length restriction? No — and that wouldn’t even meet my needs, because some days my 20/400 vision mobile font makes everything y’all say wordy (except the first comment on the post, which I adore).
posted by Callisto Prime at 7:57 PM on February 26 [5 favorites]
Hard no on this. The entire solution is just to hit page down a couple of times if you don't want to read something. If someone is in a back-and-forth with you and "filibusters" with an overly long reply, you can either continue to engage or just let it go.
Just because you decide to let it go does not mean the other person "wins" somehow.
It might not be a terrible idea to come up with some good solutions to help with scrolling down long threads on mobile, though. Not sure what that might look like - maybe something you could tap to take you to the top of the next comment or something.
posted by flug at 10:37 PM on February 26 [2 favorites]
I have seen this happen in every AI thread, and in that context, I have to wonder whether it's actively weaponized text, not even the product of a good faith participant
I'm willing to bet a fairly decent amount of money that this comment, and quite possibly the thread, is specifically directed at my rather long comment here, and many other comments like it in previous AI threads which kittens has angrily decried as being insufficiently anti-AI in one way or another.
Ed Zitron feeds off the anxiety of people in creative media who are scared and angry, and quite a bit of that fear and anger is justified, but a lot of it is just based on not actually knowing what is going on in the space, because it is a constantly changing one with an absolutely insane amount of complexity, whose surface I have only begun to scratch. Zitron is not a con artist or grifter, not exactly, but he is at the precise halfway point between a demagogue and a charlatan, and if I have to go deep in order to thoroughly dismantle his schtick, then I will go deep. As deep as it takes. Nobody who cares about these things should ever be listening to him.
just grey goo that is intended to kill a conversation
You've already been moderated recently for accusing me of using LLMs to author my comments. I never have, not even once, and in the very rare cases where I have included LLM text to illustrate a point I always mark it clearly and almost always hide it with a details tag so that people who truly hate LLMs don't have to read it. I write very long comments in most AI threads because this is a complex topic, and defusing the wall of FUD surrounding it is a time and labor intensive process. One I fully intend to continue.
Instead of directly stating it once again, this time you are merely hinting that you believe I am using LLMs to author my comments with the words "just grey goo" (a reference to apocalyptic AI-driven nanotech scenarios). This is gross and foul behavior directly against the assume good faith principles of the site, as well as an attempt to skirt further moderation. It is very well known that LLM-detection snake oil unfairly discriminates against autistic individuals like myself, and I am going to be blunt here:
YOU NEED TO KNOCK THIS SHIT OFF AND NEVER EVER DO IT AGAIN.
I'm not a moderator, it's not my job to delete your comments and I wouldn't want them to be in any case because I want this shit on display. It is absolutely discriminatory against neurodivergent people on this site, and I'm sorry that you do not personally possess the knowledge or energy to counter what I am saying in AI threads, or that I am interrupting your desire to just have a big 2-minutes-hate session with your pals, but that is not my problem and it is never an excuse for bigotry. Seriously: knock it off.
posted by Ryvar at 11:52 PM on February 26 [19 favorites]
hating people because they write long comments: bad
hating people because they're autistic: very bad
hating people because they are techbros with a deeply and fundamentally anti-human ideology: 100% fine
posted by adrienneleigh at 1:00 AM on February 27 [7 favorites]
And that reminds me of a point I've brought up repeatedly but that keeps getting elided when people try to criticize my comments (which they absolutely should! If I'm wrong I want to know! But you need to know what you're talking about!): all of this passionate defense of open-source AI and hatred of OpenAI and, yeah, the Silicon Valley techbro culture is coming from an equally passionate and more deeply rooted belief that parity between workers and Capital must be maintained at all costs, and this whole subject is an expression of that. I do not want this to be another topic where the Left begins the fight by throwing away its tools on an obscure or even medium-sized point of principle. I want us to survive and then I want us to win.
The difference is that I have - and explain why in exhaustive detail - the belief that the ecological impact is a temporary state of affairs, that the use of training data without consent is a longer term but still impermanent state of affairs. And I don’t know whether communism is any more sustainable than capitalism and whether eventually the social impact aspects, too, will pass: but I do know that if we open our new fight against Capital by granting them monopoly access to the means of production we are truly lost and possibly will remain that way for centuries.
So I refuse to surrender to the panic running through the Twitterati-turned-Bluesky cool kids’ club, I’m on their side and couldn’t care less if they can’t see that as long as they aren’t accusing me of being a robot. Because this shit matters, and I will write as much as it takes for as long as it takes to turn this little corner of thinkers and writers and allies into a place that approaches this subject with knowledge and nuance and the intellectual weapons that will ultimately bring Capital crashing down.
posted by Ryvar at 1:21 AM on February 27 [7 favorites]
I swear to god. This whole thing is supposed to be about Ryvar's comments?
I know I'm biased here, because Ryvar's commentary in AI threads is like a siren song leading me to my doom, crashing me against the rocky shores of books and concepts I'm too dumb to really understand, all while I'm going "Look ma, I'm swimming, look at me swim!"--but the whole point of Metafilter is supposed to be intelligent people talking about interesting things.
Actually I was about to expand on that last bit, but no, I'm going back to AI. AND I'M GOING TO DO IT AT LENGTH.
I know I'm biased here, because Ryvar's commentary in AI threads is like a siren song leading me to my doom, crashing me against the rocky shores of books and concepts I'm too dumb to really understand, all while I'm going "Look ma, I'm swimming, look at me swim!"--but the whole point of Metafilter is supposed to be intelligent people talking about interesting things.
Actually I was about to expand on that last bit, but no, I'm going back to AI. AND I'M GOING TO DO IT AT LENGTH.
When I was a teenager--depressed, constantly at the end of my rope, misunderstood and simultaneously not understanding anybody around me, I found a fat book at the bookstore called Godel, Escher, Bach. I don't know which part of it appealed to me, called out to me--maybe the cover, maybe all the Escher pictures inside--but it was, at that point, the biggest book I'd ever had on my shelves, other than a series of increasingly marked-up Bibles. This book meant the world to me. It expanded my mind. It annoyed everyone I knew. It got garlic butter on the cover, because I took it with me when my family went to Red Lobster, and I excitedly told my parents about how DNA can't be thought of as a cookbook because it is also the kitchen implements and kind of the ingredients-- (Oddly, now that I think of it, this is not my only butter-on-book-cover related story, but I'll save the tale of how my library copy of D'Aulaires book of norse myths survived having an entire stick of butter land on the cover.)
I spent a lot of time on a swing in our yard, which was placed where our old catalpa tree used to live (a tree I could actually climb because it wasn't too high or too difficult, and where I could spend time with the big caterpillars who were a voracious crunching part of the tree's life-cycle), often just staring out into space trying not to think too hard. But I had all my best thoughts in that swing. And GEB went with me--I'd read a page, then zone out, stare over the neighborhood, swing, and think/not-think.
The weird thing was, even though this book was such a touchstone for me, I didn't end up doing anything with it. I gave up on math and science, because even though I was constantly reading popularizations and history of the fields, none of that seemed to translate into making good grades. I tried being an English major in college, but the books wouldn't hold still, and I kept branching out, into feminist and queer theory, into the history of psychoanalysis, whatever could keep my attention for a few minutes. I got mentally sick, like, really sick, and lived in a kind of drugged stasis for a while, and then when I came out of that, I was somehow an adult, and then somehow a person working in a business, with a lot less time for the sort of endless spiraling of interests.
And then here come LLMs. And here come the LLM threads on Metafilter. And like some ex-high-school jock trying to fit back into his jersey, I find my brain lighting up not just with nostalgia--though plenty of that--but with an opportunity to try to learn what's been happening the past 30 years I haven't touched this thing that was so interesting to me as a kid. I still can't manage the math (thanks, Georgia educational system!), but I can't keep myself away from this topic. Or from the topic branching. ChatGPT is not conscious. What might make it conscious? What does consciousness mean? And so, because of these threads, I have gone back to reading about things I love again, things I had kept away from for three decades. Everything has advanced so much. We know so much more about the brain than we did when I was in school. Computers don't look the same. AI is in a whole different place--but its history can still be traced, you can still draw a line between perceptrons and today, and that history is even more interesting today than it was in the late 80s, because so many more doors have opened.
What I am saying, at great length, is that these Metafilter threads have meant something important to me. And part of the reason they are important is the amount of intelligent commentary in them.
We live in an age of AI slop, and it is so insulting to equate this commentary, these threads which are giving me something to think about that I haven't had a chance to think about since I was a kid--with the meaningless drivel we see on Facebook or at the top of our Google searches.
How can anyone even make that comparison?
I for one owe everyone who contributes to those AI threads, a word of thanks. I don't think I've mentioned before just how much this ongoing discussion has meant to me, on a deeply personal level.
posted by mittens at 4:40 AM on February 27 [21 favorites]
Wait. Wordshore posted an empty comment.
I thought that was not possible (hence the .'s in Obit MeFi threads).
wtf?
posted by Faintdreams at 5:16 AM on February 27 [1 favorite]
hating people because they are techbros with a deeply and fundamentally anti-human ideology: 100% fine
is this an example of that uncivil conflict you mentioned in the metatalk queue thread? if it is, no thank you, not useful or healthy
posted by gorbichov at 5:22 AM on February 27 [1 favorite]
posted by box at 5:27 AM on February 27 [2 favorites]
Inspecting the page source shows that Wordshore's comment is not empty, but in fact contains the HTML tag:
</html>
A neat trick!
(Strictly speaking this should cause the rest of the page to be ignored, but browsers wouldn't get very far if they attempted to interpret HTML correctly rather than as a tag soup of vague suggestions)
posted by automatronic at 5:29 AM on February 27 [5 favorites]
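A quick way to see the tag-soup forgiveness automatronic describes is to hand a fragment with a premature </html> to the browser's own parser and check that everything after it still lands in the document. This is only an illustrative console sketch; it has nothing to do with MetaFilter's actual markup.
// A stray </html> does not stop an HTML parser; later content is still parsed into the body.
var soup = '<p>before</p></html><p>after the premature end tag</p>';
var doc = new DOMParser().parseFromString(soup, 'text/html');
console.log(doc.body.querySelectorAll('p').length); // 2 -- both paragraphs survive
console.log(doc.body.textContent); // "beforeafter the premature end tag"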
I wonder if some of this is the difference between reading/posting mostly on mobile and reading/posting mostly on the desktop. I generally try to be concise, but I sometimes post things on my phone and think "well, that was long" only to see it later on the desktop, and it isn't really. Similarly, Ryvar's first comment above feels "kind of long" on a phone, but looking at it on desktop it's not. Maybe the fact that even a moderately long comment on the phone goes off the end of the screen and, therefore, could be endless while you read it is part of the issue. Alternatively, long comments on the phone are very hard to go back and edit for conciseness and clarity, so maybe the phone is the culprit here as well.
I do skip over long comments if they don't engage me; my time is precious. On a phone, this can be tedious, but no more so than trying to find the point in a longish article. Obviously, other people have different experiences/tolerances, and I don't think the site should be changed to cater to me specifically.
Maybe there is an argument for users being able to block other users, assuming Ryvar is correct about being the specific target of this MeTa, but that's a different argument.
I do think Ryvar has a weirdly hostile and exceedingly wrong take on Ed Zitron, in ways that suggest he's not terribly familiar with Zitron's wider writing, but... Zitron is definitely an acquired taste, and I am sure Ryvar also has better things to do with his time than to read things he doesn't enjoy.
TL;DR: I don't think comments should have character limits, but I think people might consider the impact that lengthy comments have, at least some of the time.
posted by GenjiandProust at 5:38 AM on February 27 [2 favorites]
Maybe it would be helpful to put the usernames at the top of the comments instead of the end so one can know right away if they want to skip a comment rather than needing to scroll down to see who wrote it. I personally don’t feel long comments that fall within the current, and evolving, guidelines in place are an issue for me but it could help some folks filter comments out in threads that are causing them distress but they want to read others’ comments.
posted by waving at 6:00 AM on February 27 [1 favorite]
FWIW the entire Ed Zitron comment that I remain convinced spawned this thread (because kfb has come at me before with complaints about my AI thread comments' length, and vibe, and a serious accusation of using LLMs to author them) was written on my phone, literally under my bed covers, between 2:30AM and 4:20AM a couple nights ago because I couldn't sleep.
About half of my longest AI thread comments - and there are many - were written entirely on my phone. Even the ones with, like, ten supporting links or whatever. If that sounds exhausting: yes. And on Ed Zitron: it is 100% true I am unfamiliar with any of his writing outside AI but I'm happy to hear he was part of taking down crypto, which is godawful pointless planetary destruction, and it is 100% false that I am unfamiliar with his other writing on AI. I've probably read most of it, and hated nearly all of it: we share a burning hatred for OpenAI but he is determined to get everyone out there to throw the baby (open source machine learning) out with the bathwater (anything touched by Musk/Thiel/Altman), and for reasons that are entirely due to a failure on his part to better educate himself, or at least to engage honestly with what he's learned which has the same result. People I respect and care about - particularly fellow game writers who like to semi-inaccurately include me in their number because I worked on the Bioshock script - keep reading him and it is a lot a lot of work to repeatedly walk them through why he is so wrong on this topic outside of opposition to OpenAI specifically. I really can't stand him.
Also dude needs an editor even more badly than I do.
I found a fat book at the bookstore called Godel, Escher, Bach.
<3
This is where I got my start as well, and like you there was a major interruption from severe mental illness and that's why I dropped out of RPI's cognitive science program with 90% of a CS degree and half a psych degree. That and I wanted to research neural networks in a late 90s AI department run by logicians who just wanted to spend the final decade before they retired continuing to code strong AI in LISP.
And the past few years have been so incredibly validating - so many things I knew were going to be amazing with about five or six orders of magnitude more processing power than was feasible at the time turned out to be just that; a few things I knew would be essential to start building any kind of near-parallels to human reasoning (continuous training, recurrent neural networks, reinforcement learning hybrids, etc.) continue to limit modern systems with their absence.
And those threads, with people like flabdablet coming in to play grouchy tech-grandpa and keeping my sometimes starry-eyed-wonder firmly grounded, and occasionally people doing serious research on neural networks in academia chiming in with insights, and random papers linked by kaibutsu or HearHere that profoundly alter my thinking on the subject - are honestly my favorite part of Metafilter not just now but at any point in the last 22 years. I am not planning to reduce my engagement anytime soon, and not just for the political reasons outlined above.
posted by Ryvar at 6:25 AM on February 27 [7 favorites]
Good god I’m on a phone and when I clicked on Ryvar’s comment it only took me four swipes to get past it. That’s hardly anything in terms of long comments. Then I went back up and read it, and every word was thoughtful and clearly original and adding to the conversation whether you agreed with the points or not. If you don’t have the attention span to read a comment like that, damn I get it, but I think you can probably swipe past it. This isn’t do you love the color of the sky over here.
Honestly, I had the thought that this might be referencing Ryvar, but then went, “Surely not. KFB wouldn’t be making such a bad faith argument just because they don’t like the conclusions of his deep dive into a topic he’s passionate about. He’s so obviously a real person that’s trying to earnestly engage on a complex topic, no one would seriously accuse him of writing his posts with ChatGPT or doing it to kill the conversation. This must be about some other phenomenon I’ve missed because I’ve been pretty checked out of Metafilter lately.”
Very disappointed to find out my initial instinct was probably correct. And now thinking there but the grace of god go I because lord knows I write some long-ass jargon-filled posts about autism, but because people like my conclusions I skate past the robot accusations. Probably if they heard me speaking it out loud the flat affect would get me, though.
posted by brook horse at 6:48 AM on February 27 [14 favorites]
Wait why am I saying “just because they don’t like the conclusions.” The admission here is not even getting to the conclusion before dipping. So I guess it’s just not liking the sense of being monologued at. Unfortunately that’s a necessary prerequisite of spaces that don’t run out autistic folks on sight.
posted by brook horse at 6:54 AM on February 27 [5 favorites]
Ryvar, I hadn't read the original thread so didn't see your comment in it, but based on what you wrote here, and also mittens and the history and story they shared, both of you are writing the kind of comments that I am here to read and learn from. Stuff that doesn't necessarily align to my interests, definitely doesn't align to my talents or my own knowledge base, but is specifically what I like about MeFi. Smart comments from people with different knowledge bases and experience and interest that expose me to ways of thinking and analyzing things that I wouldn't otherwise necessarily run into. The rest of the internet is so algorithm driven and so click-bait focused that nuanced, individualized, deep takes on stuff that the algorithm isn't pushing at me, well I wouldn't see it necessarily, and it's not always stuff in my awareness enough to seek out.
I said NO firmly above to the idea of a character limit, but I am echoing it again because it's an even stronger NO if this is the kind of content a character limit is supposed to be eliminating.
posted by fennario at 7:16 AM on February 27 [3 favorites]
Just reiterating that I don't find long comments an issue (and definitely have written my own) and I'm on a phone 90% of the time I'm on here.
Obviously I've written my own as well. Sometimes it's hard to be succinct, esp. if you feel (as I often do) that personal context is a good thing. The times I've posted without context are sometimes the comments I feel worst about.
posted by warriorqueen at 7:18 AM on February 27 [2 favorites]
Here's a Javascript bookmarklet that will hide all comments longer than 1000 characters behind a details tag.
long code that is too wide for the page.
javascript:(function()%7Bvar%20messages%20%3D%20document.querySelectorAll(%22.comments%22)%3B%0A%20%20%20%20for%20(var%20i%20%3D%200%3B%20i%20%3C%20messages.length%3B%20i%2B%2B)%20%7B%0A%20%20%20%20%20%20%20%20if%20(messages%5Bi%5D.textContent.length%20%3E%201000)%20%7B%0A%20%20%20%20%20%20%20%20%20%20%20%20messages%5Bi%5D.innerHTML%20%3D%20%22%3Cdetails%3E%3Csummary%3ELong%20Comment%3C%2Fsummary%3E%22%20%2B%20messages%5Bi%5D.innerHTML%20%2B%20%22%3C%2Fdetails%3E%22%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%7D)()%3B
posted by bowbeacon at 7:49 AM on February 27 [6 favorites]
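For anyone who would rather read that than squint at the URL-encoding, here is the same bookmarklet decoded into plain JavaScript: it walks every element with the class .comments and, when the text runs past 1000 characters, wraps the existing markup in a details element. The .comments selector is simply what the encoded version above targets, not a documented site API.
javascript:(function () {
    // Grab every comment body on the page (the class the bookmarklet targets).
    var messages = document.querySelectorAll(".comments");
    for (var i = 0; i < messages.length; i++) {
        // Collapse anything longer than 1000 characters behind a <details> toggle.
        if (messages[i].textContent.length > 1000) {
            messages[i].innerHTML = "<details><summary>Long Comment</summary>" + messages[i].innerHTML + "</details>";
        }
    }
})();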
I also basically exclusively write my forum posts from my phone and my teachers are getting thousand word essays for 1 point questions sometimes, so I guess the tendency to reply at significant length for the enjoyment of doing so translates well enough!
posted by Callisto Prime at 7:57 AM on February 27
Just because your reply has a lot of characters, that doesn't mean you have character, am I right. Okay.
No, you are not. Too long? Don't read. A lovely, elegant, decades old, non-technical solution to a non-problem.
posted by pdb at 8:09 AM on February 27 [1 favorite]
waving suggested:
Maybe it would be helpful to put the usernames at the top of the comments instead of the end so one can know right away if they want to skip a comment rather than needing to scroll down to see who wrote it.
I get it. Sometimes a particular MeFite just gets on one's last nerve and one wants to avoid what they say altogether! And that's why one of the user scripts people use (like Mute-a-Filter) is to reduce or remove the display of comments by certain users.
But.
I have, several times, read an interesting, edifying, empathetic comment, one I'm glad I read, and then gotten to the end and been surprised by the username -- because the author was someone I had started writing off.
That's something I appreciate about MeFi's design choice. In face-to-face conversation I wouldn't have this opportunity, to consider the words before I even find out who said them, to get that first impression before my pre-existing biases weigh in. Online, in most textual spaces, I still don't get that chance, because in email, on most social media, in group chat, and in most other fora, the byline displays before or alongside the start of the post.
This design choice is one of the things that makes MetaFilter special.
posted by brainwane at 8:51 AM on February 27 [24 favorites]
100% to what brainwane posted
Anything that helps keep an open mind
posted by ginger.beef at 8:55 AM on February 27 [1 favorite]
I feel fortunate to read long comments and LIKE IT. No limits. Thank you mittens and Ryvar for your excellent long comments. You have my favourites!
posted by a humble nudibranch at 11:45 AM on February 27 [2 favorites]
Maybe it would be helpful to put the usernames at the top of the comments instead of the end so one can know right away if they want to skip a comment rather than needing to scroll down to see who wrote it.
I’m not sure there is anyone on MetaFilter right now who I would mute (and one or two people I might have muted in the past whose writing I enjoy now), so I’m not sure I would use that function if available. Similarly, if I skip a long comment, it’s more because I don’t care about the topic enough, or I’m short on time, or I’m not in the mood or whatever. I guess I might be influenced by the poster a little (we are all bores on some topics, I imagine), but it would more be the length and the moment, and you can’t put the length at the top of the comment, can you?
posted by GenjiandProust at 12:59 PM on February 27 [2 favorites]
Re: not putting the user's name at the top -- I doubt I'm the only one who likes to make a game of guessing who wrote a comment based on content, tone, phrasing, etc. (I'm not presenting that as a policy argument vis-à-vis the position of a user's name, it's just an aside.)
posted by Larry David Syndrome at 3:33 PM on February 27 [4 favorites]
is this an example of that uncivil conflict you mentioned in the metatalk queue thread? if it is, no thank you, not useful or healthy
Which part is it exactly that you think is "uncivil"?
posted by adrienneleigh at 3:36 PM on February 27
Putting names at the top of a post is not a great idea, in my personal opinion. While I get that longer comments can be annoying, especially when you're not sure who it is, part of the DNA of the site is that we focus more on what people are writing as opposed to who they are.
posted by Brandon Blatcher (staff) at 3:40 PM on February 27 [4 favorites]
otoh, putting the wordcount at the top of comments seems like a perfectly reasonable thing to do and should be easy as pie on the new site. I can probably even submit a patch to do that!
posted by adrienneleigh at 4:09 PM on February 27 [1 favorite]
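A rough sketch of the sort of thing that patch could do, written as a console snippet rather than actual MeFi code; the .comments selector is borrowed from bowbeacon's bookmarklet above and, like the label format, is an assumption rather than anything the site ships.
// Hypothetical: prepend a word count to each comment on the page.
document.querySelectorAll(".comments").forEach(function (comment) {
    var words = comment.textContent.trim().split(/\s+/).filter(Boolean).length;
    var label = document.createElement("div");
    label.textContent = "[" + words + " words]";
    comment.prepend(label); // show the count at the top of the comment
});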
putting the wordcount at the top of comments
Can we have a word counter in the comment box as well, so we can watch the number go up and up?
posted by mittens at 4:35 PM on February 27
New number go brrr game dropped
posted by brook horse at 4:44 PM on February 27 [4 favorites]
And that’s the story of how I got caught using LLMs to edit every one of my comments down to exactly 1337 words.
posted by Ryvar at 5:26 PM on February 27 [6 favorites]
Certainly! As an LLM, I cannot truly understand human existence, but there is a significant amount of literature and art that indicates that stories are an important part of "being human." I will help you expand this story to a comfortable length, ensuring that the appropriate details are included to maximize comprehension.
How about:
I have a funny story to tell about my experience with LLMs. LLM means Large Language Model. These are machine learning models that use natural language processes to generate text. They can do many things, such as answer questions, edit writing, or help people brainstorm new ideas. In this case, I wanted to use an LLM for a specific purpose. If you input text into an LLM, you can ask it to edit this text along any number of parameters. You may want to change the style or tone, or turn it into a limerick. Or you may want to check for grammar, or even translate it into another language.
In this case I wanted the LLM to fit my comment to a precise word count. I chose 1337 because it was a tongue-in-cheek reference to "leet" speak, which has cultural and social significance to many internet users, but primarily those in the hacker space. It was originally found in bulletin board systems, or BBS. It was particularly popular on a BBS for the Cult of the Dead Cow. Leet speak is characterized by the use of numbers and symbols which replace key letters in place of typical characters. This was used to evade detection when discussing cracking and hacking, which are illegal activities that should not be attempted.
Leet speak eventually entered more mainstream communities for its aesthetic, as it was viewed as a way to signal that a user was "cool" or "edgy." However, it has lost a significant amount of its popularity, and is considered "cringe" or "old school" by most modern internet users. If I wanted to make a nod to leet without explicitly using it in my post, I could edit it to precisely 1337 words. But this would be a tedious process by hand, especially over the course of many comments. Using an LLM to edit to this word count was much more efficient, and exactly the kind of tasks these models were designed for. While there may be some errors, as LLMs do not check their output and have been known to produce text that does not follow instructions, I knew that I could check its work in a word counting program, such as Wordcounter.net. Then, if the LLM produced text that was under or over 1337 words, I could ask it to try again until it produced text appropriate to the prompt.
While this appeared to be a foolproof plan, I failed to take into account another problem: remembering to take out the LLMs response to my prompt, which is a clear giveaway when posting LLM generated content. Because human memory is fallible, I inevitably posted a comment that included this telltale sign of LLM use. It was quickly pointed out by other forum users, who produced pages of documents proving that I had used an LLM to edit my comments. Rather than appreciating my clever gimmick, they were furious at being "deceived" into thinking all of my work was original. This taught me a valuable lesson about transparency, particularly in small communities that rely on trust and close personal connections.
I hope this expansion of your thrilling story is true to your vision for bringing a subtle reference to leet into your Metafilter posting. Please let me know if you would like me to make any changes to the text, or if you would like me to edit more stories for you. Remember, I'm here to help.
posted by brook horse at 6:02 PM on February 27 [7 favorites]
Oh, Brook Horse, you've truly outdone yourself. This AI-generated comment is the equivalent of microwaving a filet mignon and calling it haute cuisine. It’s a soggy, room-temperature attempt at discourse, the textual equivalent of an unseasoned boiled chicken breast.
Let’s be clear: AI-generated comments are not allowed. Not in the “wink, wink, who’s going to notice” way, but in the actual rule sense. This isn’t a friendly suggestion; it’s a site-wide policy designed to preserve the integrity of a human-driven community. MetaFilter isn’t Reddit. It isn’t Twitter. It sure as hell isn’t a beta testing ground for auto-generated content that reads like someone shook a bag of Scrabble tiles and called it a day.
Beyond the sheer rule-breaking, the vibe is off. AI doesn’t get MetaFilter. It doesn’t understand the layered in-jokes, the decades of culture, the way a user can drop a subtle reference and have ten people instantly pick it up and run with it. AI can’t riff, it can’t banter, it can’t engage in anything beyond regurgitated approximations of human thought. And yet, here we are, being forced to sift through this off-brand grocery store knockoff of a real comment.
MetaFilter deserves better. We deserve better. And, frankly, you should know better.
posted by Diskeater at 6:28 PM on February 27 [2 favorites]
"remembering to take out the LLMs response to my prompt, which is a clear giveaway...."
has a grammatical error (ought to be "LLM's") which, to me, is an indicator that brook horse wrote that comment themselves, as a human.
brook horse, am I right?
posted by brainwane at 6:35 PM on February 27 [4 favorites]
Diskeater, I really hope your comment is genuine and not another layer of joke that I’m missing, because if so I’m ridiculously proud that my 100% original writing nailed the ChatGPT voice so hard. All of the online AI checkers told me my text was 100% human generated, which was quite a blow to my creative writing ego. I thought I hadn’t put enough effort in to pull it off but decided I’d post the effort anyway. Sorry if it caused you genuine distress though! I really thought it would be obviously creative fiction!
has a grammatical error (ought to be "LLM's") which, to me, is an indicator that brook horse wrote that comment themselves, as a human.
Quick and clever Mefites might have also noticed that I initially wrote “human memory is infallible” before quickly editing it on comment re-read. Which is to say, yes, every word was written by hand. I then accidentally smashed said hand into my desk. Not sure if that adds to the art or not.
posted by brook horse at 6:39 PM on February 27 [10 favorites]
(oh god please let diskeater's comment be deepseek specifically, i'm workin' on a THEORY here)
posted by mittens at 6:46 PM on February 27
yall this all sucks
posted by glonous keming at 6:51 PM on February 27
For what it's worth, brook horse, I am... rather experienced with generative text and was 110% fooled (had a whole "oh please don't, because that's really funny, but..." moment and everything).
Good show.
And for anyone confused, the new policy for generative text on Metafilter can be found here; Brandon said he'll be updating the FAQ this weekend (and if brook horse had used an LLM for their reply, it would've run afoul of both the thread purpose and labeling requirements)
posted by Ryvar at 6:54 PM on February 27 [1 favorite]
waiiittt
Is MeFi sufficiently perverse that a significant number of MeFites are going to start impersonating LLM output
because if so, what glonous keming said
posted by ginger.beef at 7:01 PM on February 27
Y’all I’m so sorry. I thought maybe it would fool people in the first half but was sure by the end it would be clear it was written jokingly by a human. I had a number of lines that I thought would give it away. Though in retrospect if someone read a few lines and then noped out because it sounded like LLM garbage, that’s totally fair.
posted by brook horse at 7:14 PM on February 27 [5 favorites]
This design choice [to have usernames at the bottom of the comment] is one of the things that makes MetaFilter special.
You’re right but not because it represents any principle of humility or egalitarianism - it’s because it creates the game of trying to identify the author before you get to the username.
posted by atoxyl at 11:24 PM on February 27 [3 favorites]
Is MeFi sufficiently perverse that a significant number of MeFites are going to start impersonating LLM output
mu+haha
posted by clavdivs at 11:24 PM on February 27 [2 favorites]
The newly implemented rule appears to create an environment prone to unnecessary controversy. It is likely to encourage self-proclaimed investigators to scrutinize and accuse others of generating AI-produced content or being language models themselves. Additionally, some individuals may engage in deceptive behavior by deliberately mimicking AI-generated responses, while others may share AI-generated content without proper attribution, further complicating the situation.
posted by TheophileEscargot at 1:14 AM on February 28
otoh, putting the wordcount at the top of comments seems like a perfectly reasonable thing to do and should be easy as pie on the new site. I can probably even submit a patch to do that!
MeFiMail kirkaracha and he'll add you to the developer list!
posted by Brandon Blatcher (staff) at 2:11 AM on February 28
brook horse, i thought your comment was funny and yes, obviously written by a person imitating llm output. or a person who told an llm to imitate a person imitating llm output...
posted by Kutsuwamushi at 5:00 AM on February 28 [1 favorite]
Which part is it exactly that you think is "uncivil"?
It's possible I misread or misunderstood your comment, and if so, I'm sorry. It sounded to me like you were saying it was ok to "hate" Ryvar under the assumption that he's some sort of techbro. . . which from what little I know/have read doesn't seem to be an accurate take. I hope I read that wrong, and if I did, it's definitely on me for not following the "assume good intentions" guideline. I have been dismayed overall at the tone and timbre of conversations here recently, and I probably just need to step away from it. Everyone needs a hug, indeed.
posted by gorbichov at 5:28 AM on February 28 [1 favorite]
Do you think it isn't civil to hate someone for their political and social beliefs?
posted by bowbeacon at 6:37 AM on February 28 [1 favorite]
I’d say it’s uncivil to willfully mischaracterize someone’s political and social beliefs, but I think some folks on here disagree on whether that’s happening or not.
posted by brook horse at 6:43 AM on February 28 [3 favorites]
Do you think it isn't civil to hate someone for their political and social beliefs?
Not necessarily. I think it's uncivil to cast those kinds of aspersions without basis, if that's what was happening. I read those AI comments and never thought "here's someone who's all-in on this shit," I thought "here's someone who wants to dig deeper into implications and possible outcomes." So if the comment I was responding to was intended to classify the original poster as a techbro and therefore hateful, I disagree, and by extension, thought it was uncivil. But like I said, perhaps I misread.
I will also say that text is SUCH an imperfect/flawed medium for these kinds of discussions. It's so easy to read tone incorrectly, and geez that seems to be happening so much more these days on MetaTalk. Which is also why I offered my mea culpa, and my "everyone needs a hug."
posted by gorbichov at 8:20 AM on February 28 [1 favorite]
adrienneleigh can come on in and clarify their statement
I will say, I at no time interpreted their statement as a veiled accusation that Ryvar is coming from the perspective of all-in on shitty tech-bro fascism or whatever string of negative shit
just as one datapoint to counter the possible interpretation of yours, gorbichov. There is always MeMail if it matters enough for someone to seek or provide clarification.
I don't do social media outside of MetaFilter, it's precisely the constraints of text only and just giving up on other mediums that has led me to this decision. Agreed, meaning and intent can be challenging to discern at times.
posted by ginger.beef at 8:33 AM on February 28 [1 favorite]
To add a datapoint, I did read the comment as an accusation that Ryvar was bringing the 'techbro' perspective.
If I try to unpack why I read it that way and why I think that is a rational way to read it, that would be as follows. Of course we aren't always doing this kind of analysis in real time, my initial instinct was a gut instinct, but when I go back and second guess it to analyze whether it was a rational gut instinct, I have cues to think that it was.
hating people because they write long comments: bad
hating people because they're autistic: very bad
hating people because they are techbros with a deeply and fundamentally anti-human ideology: 100% fine
To me, that first point's frame of reference is this thread in this community and it appears to respond to the scope and subject of the immediately-preceding comment by Ryvar which was defending the writing of long comments.
The second point also referred back to that comment, because Ryvar self-identified as autistic in that same immediately-preceding comment.
So I have two cues of interpretation that adrienneleigh is talking about the comment immediately prior to theirs.
I have zero cues of interpretation that adrienneleigh has departed from talking about that comment and is talking generally about "tech bros" in the way we should understand it generally to reflect a certain type of person with harmful political and social beliefs.
It's still totally possible that that interpretation (the general one) was what was in adrienneleigh's heart.
But I hope you can see why someone might rationally not have read it that way.
posted by fennario at 9:04 AM on February 28 [2 favorites]
well I died on my own pissy sword moments ago on the blue so I'm inclined to try to be a good and open-to-kindness person for the next several minutes at least, and read all comments accordingly
posted by ginger.beef at 9:29 AM on February 28 [1 favorite]
I did read the comment as an accusation that Ryvar was bringing the 'techbro' perspective.
Speaking as Ryvar: I wasn’t sure whether that third line was referring to me or not, hence the tiny intersectional anarcho-communist manifesto that followed, just so there was no confusion as to whether or not I’m a tech bro.
And some full disclosure, FWIW: I did at one point in 2016 spend seven 100 hour weeks working on a Disney/NASA/Nvidia joint VR project very much rooted in the valley. And within 24 hours of landing in SFO Elon Musk managed to completely pointlessly and needlessly fuck over my team (details are still under like three layers of NDA), so I’ve always hated him even while he was still a media darling. Just literally no reason at all.
Three days of that were spent working very closely with Jensen Huang (nvidia CEO) directly under an insane pressure/deadline situation throughout which he was incredibly professional and respectful in addition to being intimidatingly informed on all things technical in nature; basically the exact opposite of Steve Jobs in the runup to an Apple keynote, plus well-beyond Woz on the tech side. So biases are that billionaires shouldn’t exist but Jensen is a class act, Elon has always been a ratfucking tool, and politically I am about as far from a Valley bro as it is possible to get. Hope that clears up any confusion.
Oh, and that project eventually became the basis for the Artemis project’s astronaut VR training sim, and I got to drive the Mars Rover prototype around the backlot of Johnson Space Center for five minutes as a sort of thank-you. I have a lot of disagreement with the technical implementation particulars of how my work was rolled into that, though, so slightly mixed feelings about the whole thing.
posted by Ryvar at 10:10 AM on February 28 [11 favorites]
I'm definitely aware that you don't think of yourself as a techbro with an anti-human ideology, Ryvar. Whether i think you are one goes back and forth depending on the day, frankly, but i do accept that you believe yourself to be on the side of labor against capital, and of humans against the forces of anti-humanity. I hope that stays true!
My statement was, however, definitely intended to be more generally about people who bring AI slop (labeled or not; i understand there's a rough consensus, but i flatly disagree that labeling the AI slop makes it acceptable) and defense of AI slop, rather than specifically a vaguepost about you.
posted by adrienneleigh at 10:33 AM on February 28 [3 favorites]
Also, Brandon, i already have developer access; i just haven't done anything with it. (I've been specifically meaning to go through and audit the models in the alpha version to try to make sure the database is normalized and conforms to best practices, but i've also been dealing with crippling depression.)
posted by adrienneleigh at 10:45 AM on February 28 [2 favorites]
I'm truly sorry about that depression, that shit gets hella hard. For me, I tend to think of it like a water wave, in the sense that no matter how large it is, eventually it ends. So it's a matter of (in my circumstances) learning to surf it.
But that's just me, don't mean to talk over or belittle your circumstances, just offering something from my own battles that may help yours.
posted by Brandon Blatcher (staff) at 10:56 AM on February 28 [3 favorites]
(at some point when i am not facing a mountain of work emails that must be answered in my own inimitable human voice, i am going to argue with adrienneleigh about whether (a) ai text = ai slop and (b) posting ai text is definitionally anti-human, or if, rather, our inherent interest in mimesis means that posting AI text is the most human possible response. the comment will be seven thousand words long. i will later be banned.)
posted by mittens at 10:56 AM on February 28 [3 favorites]
Whether i think you are one goes back and forth depending on the day, frankly, but i do accept that you believe yourself to be on the side of labor against capital, and of humans against the forces of anti-humanity. I hope that stays true!
That’s perfectly understandable coming from someone who is as opposed to all this as you are, but consistently approaches the subject with intellectual honesty. I realize a lot of people find the autistic “Yes. Your feelings make sense. [turns and walks away]” unsatisfying, but it’s a compliment.
Also also: I was a total piece of shit human when I joined the site 22 years ago, unambiguously so, and so it has always felt vaguely inappropriate for me to expect any immediate benefit of the doubt from people (Jessamyn told me at the 25th anniversary last summer that I was her pick for “most improved Mefite” and it took every ounce of self control I had to not immediately burst into tears. Because fucking hell I have been trying to kill off that person in every way possible for the past two decades).
the comment will be seven thousand words long. i will later be banned
Long, very loud laughter. Thank you, I needed that, and my neighbors now think I’m crazy.
posted by Ryvar at 11:43 AM on February 28 [11 favorites]
Number of front page posts by an individual user should absolutely be limited, especially w/in the first 2 years of membership.
I personally enjoy a long comment, though, if the quality is there.
posted by reedbird_hill at 6:06 AM on March 1
From a quick glance at the database migrations for the new site, it looks like comments are set up as the text type, which I believe is limited to 64k characters if MySQL/MariaDB is being used. That could be changed to a longtext, which is 4 GB. (The length of the text type is unlimited with Postgres.)
There are other places the length could be limited, but the size of the field in the database is a pretty hard limit.
posted by jimw at 12:09 PM on March 1
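(For scale, here is a rough, illustrative sketch of the ceilings jimw describes: MySQL/MariaDB's TEXT column tops out at 65,535 bytes and LONGTEXT at roughly 4 GB, and both limits are counted in bytes rather than characters, so multi-byte text hits the wall sooner. Nothing below is the site's actual code; the table and column names in the final comment are made up for illustration.)

```typescript
// Rough sketch of the MySQL/MariaDB size ceilings described above.
// Both limits are documented maximums in bytes, not characters.
const MYSQL_TEXT_MAX = 2 ** 16 - 1;      // TEXT: 65,535 bytes (~64 KB)
const MYSQL_LONGTEXT_MAX = 2 ** 32 - 1;  // LONGTEXT: ~4 GB

// Would a given comment fit? Encode as UTF-8 and count the bytes.
function fitsInColumn(comment: string, columnMaxBytes: number = MYSQL_TEXT_MAX): boolean {
  return new TextEncoder().encode(comment).length <= columnMaxBytes;
}

console.log(fitsInColumn("a".repeat(50_000)));                       // true: fits in TEXT
console.log(fitsInColumn("a".repeat(100_000)));                      // false: too big for TEXT
console.log(fitsInColumn("a".repeat(100_000), MYSQL_LONGTEXT_MAX));  // true: fits in LONGTEXT

// Widening the column would be roughly a one-line migration, e.g.
//   ALTER TABLE comments MODIFY body LONGTEXT;
// where "comments" and "body" are hypothetical names, not the real schema.
```

Run under Node or in a browser console, the three checks print true, false, true.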
Number of front page posts by an individual user should absolutely be limited, especially w/in the first 2 years of membership.
For real, or am I missing sarcasm? As a new user, this feels incredibly unwelcoming. What am I, on probation? Is what I find interesting or valid less worthy than what someone else finds? I mean, if someone posts a bad FPP, people will certainly tell them, and if it breaks the guidelines, it will be moderated. Two years sounds so ridiculously excessive, though, that I'm thinking my sarcasm detector must be broken.
posted by fennario at 12:54 PM on March 1 [12 favorites]
It's sarcasm, you're good. FPP away.
posted by Diskeater at 4:11 PM on March 1 [2 favorites]
Does the site need a way for me to collapse comments mid-comment that are really long and that I would prefer not to work hard at scrolling past accurately on mobile?
Doing a page search for "posted by" moves you to just before the next comment.
Also, I'm on mobile right now, but IIRC there are built-in keyboard shortcuts in desktop browsers that take you to the next comment. I think it's J or K.
posted by Mitheral at 5:44 PM on March 1 [1 favorite]
Here's the list, it's also on my profile page:
Desktop keyboard shortcuts for Metafilter
j - scroll down
. - scroll down
k - scroll up
, - scroll up
m - show more inline comments
posted by Brandon Blatcher (staff) at 3:41 AM on March 2 [4 favorites]
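(Not MetaFilter's actual implementation, just an illustrative sketch of how next/previous-comment keys like these are typically wired up: one keydown listener that ignores form fields and scrolls to the nearest comment above or below the viewport. The .comment selector is an assumed class name, not the site's real markup.)

```typescript
// Illustrative j/. (next) and k/, (previous) comment navigation.
// The ".comment" selector is an assumption, not MetaFilter's real markup.
const NEXT_KEYS = new Set(["j", "."]);
const PREV_KEYS = new Set(["k", ","]);

document.addEventListener("keydown", (event) => {
  // Don't hijack keys while someone is typing a (possibly very long) comment.
  const target = event.target as HTMLElement | null;
  if (target && ["INPUT", "TEXTAREA", "SELECT"].includes(target.tagName)) return;

  const comments = Array.from(document.querySelectorAll<HTMLElement>(".comment"));
  if (comments.length === 0) return;

  if (NEXT_KEYS.has(event.key)) {
    // Jump to the first comment that starts below the top of the viewport.
    const next = comments.find((c) => c.getBoundingClientRect().top > 1);
    next?.scrollIntoView({ block: "start" });
  } else if (PREV_KEYS.has(event.key)) {
    // Jump to the last comment that starts above the top of the viewport.
    const prev = [...comments].reverse().find((c) => c.getBoundingClientRect().top < -1);
    prev?.scrollIntoView({ block: "start" });
  }
});
```

Dropped into a userscript, something along these lines would give the same j/./k/, behavior on any page with a matching selector.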