please consider content warnings for AI interaction May 12, 2023 6:15 AM

i would like to respectfully ask the community to please consider placing some textual warnings in FPP text when linking directly to websites that will lead to interaction with chatGPT or similar machine-learning systems.

if the link is in the FPP text, it is easily clickable without seeing the MetaFilter tags that are inside the post, so i myself, and perhaps others too, would greatly appreciate it if people making links to these sorts of things on the front page placed a little bit of warning text, content warning, trigger warning, or the like around links to things that directly interact with AI constructs.

the post that drove me to write this was this one about a site called goblin.tools, which uses chatGPT or something as its backend. the post had one single MeFi tag for AI, which was only visible from inside the post. as a result i clicked on the goblin.tools link and, unknown to me, interacted with chatGPT, which is something i never, ever, ever, ever wanted to do. i am fairly upset and i hope to never experience this interaction again, which is why i ask the community to please consider this request.
posted by glonous keming to Etiquette/Policy at 6:15 AM (195 comments total) 12 users marked this as a favorite

In the same way that folks used to add [SLYT] to "Single Links to YouTube," I would be very happy to see [AI] after a link to an AI-driven web site.
posted by wenestvedt at 6:37 AM on May 12, 2023 [13 favorites]


Mod note: Comment removed. Please treat the request seriously and respectfully or refrain from posting, thank you.
posted by Brandon Blatcher (staff) at 6:59 AM on May 12, 2023 [6 favorites]


Agreed.

But also, since there are always going to be things that people forget to point out or don't know or agree that they should, whether in posts or in comments, I try to look at other comments before clicking through to sites I don't know.
posted by trig at 7:03 AM on May 12, 2023 [2 favorites]


I too would appreciate if we (as a community) tried to adopt / encourage this as the norm.
posted by Faintdreams at 7:48 AM on May 12, 2023 [1 favorite]


I am fine with this, but I'm curious and I hope it's ok to ask: What is the potential issue here? Like is AI phobia a thing or are there workplaces that don't want people clicking through to AI or is there an ethical stance that some people are choosing to avoid AI for some reason?

Again, apologies if the question is disrespectful. I do not mean it to be. I'm happy to observe the boundary if I ever have occasion to. I'm just curious about the origin of the issue. I am not questioning the validity of the issue, which is obviously not for me to decide -- you can set whatever boundary you like for whatever reason you like.
posted by If only I had a penguin... at 7:56 AM on May 12, 2023 [40 favorites]


It is likely that any text entered into an LLM is retained by the owner of that LLM (OpenAI, Google, etc.), ostensibly for training or improving the model. So even if you're not concerned about privacy issues like personal info leaking into corporate data pools, simply interacting with an LLM means you are providing it with additional source material. So if you have an ethical problem with the way that LLMs are being built (scraping the internet, etc.) or the many other problematic aspects of the tech, then you may not want to interact with a site built with that toolchain. That's just one example.

I think in the short term, making it customary to add a little ("[AI]") CW in the post seems fine, but within a few years it may be impossible to know, as the tech becomes ubiquitous, amorphous, and the already frayed label of "AI" oozes into all aspects of our lives...
posted by gwint at 9:33 AM on May 12, 2023 [26 favorites]


I'd be pleased to see that as well, though I agree we're maybe already at the point where AI is involved in a lot of web content in ways that may be harder to detect and not obvious to flag. But when one does know, it would be a nice courtesy.

Mostly I just wanted to note, if we're doing a round of "taking the community's temperature on this stuff", that a couple of times recently I saw and flagged "I asked ChatGPT to answer this for you..." answers in Ask and they were promptly deleted. I really appreciated that - thanks for being responsive to that stuff, mods!
posted by Stacey at 9:42 AM on May 12, 2023 [15 favorites]


Thanks for the explanation gwint.
posted by If only I had a penguin... at 9:43 AM on May 12, 2023


Plus one for this request.
posted by cocoagirl at 9:49 AM on May 12, 2023 [1 favorite]


Minus one for this request.
posted by Ahmad Khani at 10:02 AM on May 12, 2023 [27 favorites]


I don’t really have a problem with encouraging people to mention that their link is a wrapper around [Product X from Company Y, which you might have opinions about] but

within a few years it may be impossible to know

it’s already unknowable - any Google search since 2019 has been processed by a transformer model - and “AI or not” is a basically misleading distinction, and anything you put into any field online might be retained (and might be scraped by somebody else for training data) and that last part you already know.

It seems like the biggest really distinct thing that people find upsetting is when generated answers to a query are presented without any transparency or credit to the original sources of information. I don’t know exactly what to do about this but I’m sympathetic there and it makes sense that it’s a sore point on this site given what it does and the history with Google. But black boxes and data exploitation and hidden biases - that’s just the continuation of the past fifteen years of the Internet, in many ways even on a technical level, and I don’t think it can be addressed on the level of “is this AI or not?”
posted by atoxyl at 10:07 AM on May 12, 2023 [19 favorites]


within a few years it may be impossible to know, as the tech becomes ubiquitous, amorphous, and the already frayed label of "AI" oozes into all aspects of our lives...

Agreed. It’s a bit like adding a Javascript CW tag right before Web 2.0 became a thing, but it’s a pretty easy request for now.

unknown to me i interacted with chatGPT, which is something I never, ever, ever, ever wanted to do

I guarantee you it wasn’t the first time, not by a long shot, and also that in another two or three years using the Internet without inadvertent LLM interaction will be functionally impossible (to the extent that isn’t already the case). You may want to consider how to face that impending change at some point. But that doesn’t alter the fact that we should respect people’s preferences until it’s an utterly moot point.

Also, if anyone wants to talk about their concerns with this - whether you’re looking for someone to just listen or there’s something you want explained - until one of the actual practicing cognitive science people shows up and offers, I’m happy to lend an ear or my understanding over MeMail. I’m onsides with artificial neural network-based technology in the long term, but I’m extremely familiar with the limitations and problematic aspects and I don’t dismiss concerns about either.
posted by Ryvar at 10:12 AM on May 12, 2023 [7 favorites]


Minus one for this request.

Was this really necessary? The poster is asking people to think about doing a thing, not demanding a reconciliation commission be stood up to throw people off MetaFilter forever if they ever link to any AI ever anywhere. If you don't want to add a warning to your posts, then just... don't.
posted by Etrigan at 10:27 AM on May 12, 2023 [34 favorites]


What is the potential issue here?

For one thing, the "hallucinations" of LLMs mean that sometimes they just....make shit up. Witness my own recent AskMe when I asked about whether ChatGPT could do my vacation research.

And for another, I want to know if the linked-to piece was written by a living person, who has biases and agendas and who even might engage with me as a reader...or just is the output of some piece of software that was triggered with an input and generated some output. That is, could this be a conversation, or am I looking at something static?
posted by wenestvedt at 10:34 AM on May 12, 2023 [5 favorites]


If you don't want to add a warning to your posts, then just... don't.

Metafilter moderation works on "read the room". If enough people say they want something, then it becomes a de facto rule. I am generally in favor of doing this because it seems to bug folks - and because I have no interest in posting such content anyway - but I don't think it should become a rule. For me, that means I need to post my opposition to avoid adding another implicit rule to Metafilter's (unwritten) de facto "read the room" rules book.
posted by saeculorum at 10:41 AM on May 12, 2023 [38 favorites]


Also - I honestly don't know how to comply with this request. I don't know of any major website that doesn't use some form of ML for ads targeting, ads personalization, content ranking, server load balancing, etc. And yes - every input you give to the website, even without ChatGPT or the huge LLMs you see in the news, is being stored and used by the website owner to refine their ML models. There really isn't any major website anymore where you are not interacting with some form of AI just by going to the website and browsing it.
posted by saeculorum at 10:53 AM on May 12, 2023 [12 favorites]


I think "trigger warning" and "content warning" are making the request a little confusing to me. I certainly have no problem with posts being clear that they're about AI toys and tools, along the same lines that someone might use SLYT. I love this latter tag as it keeps me from accidentally clicking videos when I think I'm about to read an article.

But there's really a particular...ah...implication with trigger warnings and content warnings, that the thing you're about to encounter is unmistakably bad and potentially harmful. So I'm not clear if we're being asked as a site to make that value judgment about these AI things? I would certainly argue against doing so.
posted by mittens at 11:10 AM on May 12, 2023 [19 favorites]


Metafilter moderation works on "read the room". If enough people say they want something, then it becomes a de facto rule.

You and I have very different views of how moderation works here. As an example, “[slyt]” is a thing. It’s not enforced, it just became a norm.

The poster didn’t ask for it to become a “de facto rule” and in fact tried really, really hard to make it obvious that they’re asking posters to consider it rather than it becoming a rule. But, as pretty much always happens, there’s absolutely no way to couch such a request without someone claiming that it’s a demand for a new entry in the MetaFilter Codex Of Ironclad Laws.
posted by Etrigan at 11:14 AM on May 12, 2023 [5 favorites]


>Minus one for this request.

Was this really necessary?


It’s a little glib, but with this kind of MeTa, people should be able to express disagreement. I would have appreciated a reason, though.

For me, I don't have strong feelings one way or the other; I don't love ChatGPT content much, and I find generated comments are getting really old really quick, but not so much as to ask for a warning. But, since it definitely seems to upset some members, it is a simple enough ask, and I think we should try to do it as much as we can under the “let's all be good neighbors” principle.
posted by GenjiandProust at 11:17 AM on May 12, 2023 [9 favorites]


I’m not against tagging AI stuff, and I am for having clear rules about posting LLM generated material (e.g., not using it to answer askmes). I do think it is pretty different than existing content warnings commonly used here. Like, someone knows when they are posting a YouTube link, or something discussing abuse, etc. I anticipate knowing that things use AI in some way will be increasingly difficult. I think there is also a difference between clicking through and being exposed to something (like sexy stuff on your work computer) and clicking through and then deciding to interact with something, like the post in question here. Anyone serious about not interacting with an AI would have the chance to read more before doing it. As things progress, they would probably want to do some research before entering any text anywhere on the web that they don’t already know.
posted by snofoam at 11:24 AM on May 12, 2023


I don't think it is too controversial to ask that posters always be clear as to what they are linking to. There's always been an objection here to so-called 'mystery meat' posts with obscure links. This is an expansion of that general policy, I think.

It seems like the biggest really distinct thing that people find upsetting is when generated answers to a query are presented without any transparency or credit to the original sources of information.

This is how I would frame it for myself. If a site is purporting to provide information then what is the source of that information? User-submitted content? Math equations? Is it clear enough that we can all judge for ourselves whether that information is reliable?

Generally we dissuade people from posting links to, say, the Daily Mail because of their politics, sure, but also because of their publish-first, ask-questions-later attitude to facts. It is unreliable content. In the case of AI, we can at least start by tagging it that way.
posted by vacapinta at 11:25 AM on May 12, 2023 [5 favorites]


I'm of two minds. I believe it's always good to hear from people what would make them more comfortable interacting with the site and the links. On the surface this sounds reasonable.

In practice, I don't really know how to comply with this request exactly. I think there is some agreement in general that posting has slowed down and that's not a good thing. I've tried to post more FPPs lately although not super lately. But the list of requests and rules seems to grow and with this one, I'm not exactly sure how I would check.

I get that this gamification-as-education (I think; I only clicked and looked briefly) is built on an AI platform and will be used for learning. If that's even the right terminology. But I'm not really sure how to vet anything I link to. What if someone used chatGPT to draft something, does that count as interacting? What if an article has a MidJourney image on the top of it?

Anyways - it seems like the amount of research and understanding of site conventions needed to post an FPP without getting a bit of pushback is becoming increasingly burdensome.
posted by warriorqueen at 11:52 AM on May 12, 2023 [9 favorites]


I feel strongly that LLM-generated texts should not be permitted, because we expect to interact with humans here and because there is zero responsibility in an LLM for what it generates.(*)

I don't see, though, how we can be expected to suss out a mechanism that in stronger or weaker forms is behind a substantial portion of Internet interactions these days.

(*) Today I challenged Bard about a factual claim it made about the end of Blakes 7:
"Yes, Avon ran away from the battle on Gauda Prime. In the final episode of Blake's 7, "Blake's 7: Redemption", Avon and the remaining crew of the Liberator are ambushed by the Federation on Gauda Prime. The crew is killed, but Avon is seen running away from the battle. It is not clear whether Avon survived or not.

Some fans believe that Avon did survive. They point to the fact that he was seen running away from the battle, and that he is a very resourceful character who would be likely to find a way to escape. Others believe that Avon died in the battle. They point to the fact that he was surrounded by Federation troops, and that it would have been very difficult for him to escape.

The truth of the matter is that Terry Nation, the creator of Blake's 7, never explicitly stated whether Avon survived or not. This has led to much speculation and debate among fans. Ultimately, it is up to each individual fan to decide what they believe happened to Avon.

Here is a transcript of the scene where Avon runs away from the battle:

* * *

**Avon:** This is it.

**Tarrant:** Avon, what are you doing?

**Avon:** I'm getting out of here.

**Tarrant:** But we can still fight!

**Avon:** We're outnumbered and outgunned. We're not going to win.

**Tarrant:** We have to try!

**Avon:** There's no point. This is suicide.

**Tarrant:** I'm not going to give up!

**Avon:** Suit yourself. But I'm not going to die here.

**Tarrant:** Avon!

* * *

Avon then runs away from the battle, and is not seen again."
IT MADE UP A WHOLE TRANSCRIPT, Y'ALL.
posted by praemunire at 12:26 PM on May 12, 2023 [14 favorites]


For one thing, the "hallucinations" of LLMs mean that sometimes they just....make shit up.

Yes. ChatGPT insists that I died several years ago, though the date and means of death keeps changing. Friends and colleagues have, on asking it, also been told about my death. Or, 'death'.

Some screenshots on Flickr. Here, I had passed away in 2018 ... from "choking on a piece of cake". But on correcting ChatGPT and getting an apology, then next asking, I had died again but this time in 2021.

I wish to disappoint my enemies, a few of whom are here on MetaFilter, by stating that at the moment of typing this, I am very much alive.
posted by Wordshore at 12:32 PM on May 12, 2023 [38 favorites]


I don't think we can expect people to know what technology is powering an interactive site, for the same reason glonous keming didn't realize what they were playing with. I thought the OP did a good job making clear the link was interactive.
posted by michaelh at 12:40 PM on May 12, 2023 [3 favorites]


Wordshore, at least you pretend-died doing what you loved!
posted by chesty_a_arthur at 12:57 PM on May 12, 2023 [15 favorites]


Wordshore, at least you pretend-died doing what you loved!

What, did ChatGPCheese claim that an enormous wheel of cheddar flattened him?
posted by wenestvedt at 1:06 PM on May 12, 2023 [20 favorites]


I feel strongly that LLM-generated texts should not be permitted

Unattributed, surely. But that's a whole other ball of wax. The specific request here is about a CW when linking to sites that in some way function using AI.
posted by gwint at 1:30 PM on May 12, 2023 [1 favorite]


It’s fine for people to disagree with other people’s suggestions. That’s how norms get hashed out. Given the comment deletion for not being respectful of the suggestion, simply saying “I do not agree with this” seems like a reasonable approach to registering disagreement without causing offense.
posted by Mid at 1:41 PM on May 12, 2023 [4 favorites]


a couple of times recently I saw and flagged "I asked ChatGPT to answer this for you..." answers in Ask and they were promptly deleted

Is this against the site rules in some way? I occasionally see “this was my Google search strategy to arrive at an answer” replies to Asks that I really appreciate because they illuminate a search strategy that would have eluded me, and I can see the same potential value in “here's how I used chatgpt to find an answer” replies.
posted by not just everyday big moggies at 2:01 PM on May 12, 2023


I think this would be polite and a good idea for the current online environment. As things flux, Mefi can adjust. Right now, there are very few ethical guidelines that folks commonly agree on in regards to AI/LLMetc. So, much like how it’s nice when people indicate a link goes to a paywalled article or site, or a link contains violent or disturbing imagery, and sometimes the poster or the community adds more details like a link to a version that isn’t behind a paywall or descriptions of the disturbing content, we can do this, too.

In a few years I think we will have a better idea of the ways these tools can be used to aid and to exploit. I think it is safe to say that Mefi will err on the side of exploitation being bad and that it should at least be notated in a post here. For now, that indicator can be used more liberally.
posted by Mizu at 2:53 PM on May 12, 2023 [3 favorites]


I vote no unless there’s a reason beyond “I don’t want to interact with anything AI”.
posted by Diskeater at 3:36 PM on May 12, 2023 [9 favorites]


I don't think this is a good idea.

The Google search prompt has been backed by machine learning algorithms for at least 20 years. Bing is the same.

In 2017 my final project for a machine learning class was to gauge consumer sentiment for Nike based on a Twitter feed. Needless to say there were a lot of other feeds and topics I could have chosen. The data is all there.

In short, we are long past the time that AI became embedded throughout the internet. Recent advancements have been cramming that down people's throats, but if you hope to avoid interacting with AI you are at least 20 years late.

Putting an AI warning on some sites might lead people to believe that other sites are AI-free, and I think that would be counterproductive. Better to make clear to people that AI will be subsuming everything they type anywhere.
posted by Tell Me No Lies at 3:38 PM on May 12, 2023 [26 favorites]


OMG Wordshore, I'm dying that your Contact Me page is blank. I love you.
posted by QuakerMel at 5:27 PM on May 12, 2023 [4 favorites]


Oh, I suppose I should remind people that all the words ever written to Metafilter, including these ones right here, are being used to train LLMs.
posted by Tell Me No Lies at 6:17 PM on May 12, 2023 [11 favorites]


Some screenshots on Flickr. Here, I had passed away in 2018 ... from "choking on a piece of cake". But on correcting ChatGPT and getting an apology, then next asking, I had died again but this time in 2021.

If I were you I would be thinking of ways to take advantage of your not-alive status in the future. Renting out your identity to fugitives from the techno-police-state, perhaps.
posted by atoxyl at 7:20 PM on May 12, 2023 [1 favorite]


If people want to put "[AI]" on their posts, fine — post what you want!

I can't get behind the idea of requiring posters to do this, either through "hard" means (mod enforcement) or "soft" (community standards). There are several reasons:
  • It's impossible to know what the developers of a site may have used behind the scenes. Requiring a "this site uses AI" disclaimer would be fundamentally the same as requiring a "this site uses Java" disclaimer.
  • Where's the line? What we're loosely calling "AI" in this thread — ChatGPT, GPT-3/4, Bard, etc — are perhaps more accurately called "LLMs", or "Large Language Models". As the name might imply, the thing that makes them different from previous language models is that they're... large. There's not a lot that's fundamentally different from previous generations of AI, at least technically. Yes, of course, they're substantially spookier than previous generations, but that's an incremental change, not something substantially different from what we had in years past. So then the question is - how big is too big? If a site uses a Markov chain (see the toy sketch after this list), does that require a disclaimer? Where, exactly, does a language model become "large" enough to require a disclaimer? Is it just "any model made after mid-2022" or is there a more precise definition?
  • What about other forms of AI? Are image generation tools (Stable Diffusion, Midjourney, etc) OK? How about previous versions of AI-assisted image generation like content-aware fill? Is the fact that the latter is built into Photoshop but the former aren't relevant? What happens when Adobe integrates diffusion models into their software? Does anything that used Photoshop now require disclaiming?
  • As others have noted, within a few years these things are going to be used pretty much everywhere. In particular: it seems pretty clear to me at this point that LLMs are going to change software development substantially. Developers who know how to use LLMs effectively can be shockingly more productive -- I myself have seen, no joke, at least a 2x speed-up in most of the work I do when using Copilot vs not. Pretty soon now, much of the software we interact with is going to be written in part by programmers using AI assistive technology like Copilot or ChatGPT. Avoiding interacting with AIs is going to very soon mean avoiding computers entirely. That's totally a choice people can make, but it's not one I think is OK to impose on others.
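
To make the Markov chain question concrete, here's a complete toy "language model" in a few lines of Python. This is purely illustrative on my part, not a claim about how any linked site works:

```python
import random
from collections import defaultdict

def train(text):
    # Map each word to the list of words observed to follow it.
    model = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=20):
    # Walk the chain, sampling a random successor at each step.
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```

Train that on a big enough corpus and it produces surprisingly plausible nonsense. Nobody, I assume, wants a content warning for it.
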
Once again, I have no problem with people who want to put whatever tags or disclaimers or anything into their posts. Post whatever you want! But I really hope that some sort of "[AI]" tag doesn't become any sort of requirement.
posted by dorothy hawk at 7:21 PM on May 12, 2023 [29 favorites]


As the name might imply, the thing that makes them different from previous language models is that they're... large. There's not a lot that's fundamentally different from previous generations of AI, at least technically

It seems like the current generation of language models is defined by the introduction of the transformer model and its attention mechanism, and the discovery that scaling of such models, in terms of parameters but even more so in terms of training data, continues to improve their performance well beyond what many people expected. So that goes back to 2017-2018. And then you've got stuff like RLHF that makes them better at interacting with people in a helpful way. But yeah, of course that paradigm is built on ideas that go back much further.
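
For the curious, the core attention operation is compact enough to write out. In the notation of the 2017 "Attention Is All You Need" paper, scaled dot-product attention is

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$$

where $Q$, $K$, and $V$ are the query, key, and value matrices and $d_k$ is the key dimension; the $\sqrt{d_k}$ scaling keeps the dot products from saturating the softmax as the key dimension grows. The mechanism itself is simple; the surprise was how far pure scale took it.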
posted by atoxyl at 7:31 PM on May 12, 2023 [3 favorites]


Sigh, I knew even as I wrote that that someone would show up to correct my oversimplification. Thanks.

Anyway, the point's the same. What's the line at which something requires a content warning? If it's not the size, is it the transformer architecture, specifically, that makes something spooky enough to require a warning? My point is that there isn't a definable line. There is, of course, a point at which generative models become "real" enough that they creep people out, but it's a personal line. The OP gets "upset" by interacting with ChatGPT and I'm not disputing that. It's a real feeling. But it's not based on any sort of line I can see us ever being able to clearly draw.
posted by dorothy hawk at 8:20 PM on May 12, 2023 [3 favorites]


If there isn't a hard and fast rule by which to decide who goes up in front of the firing squad and who doesn't, it's wrong to suggest that posters try to accommodate people's preferences. After all, someone could make an honest mistake and then have their entire life ruined.
posted by tigrrrlily at 8:26 PM on May 12, 2023 [2 favorites]


I can't speak for anyone else, but for me - if a link relies on text that is generated by LLMs, I am probably not interested in it and would rather know that in advance.

Obviously, yes, I get that if I use social media and if I use Google and if I live as much of my life on the internet as I do, I can't avoid interacting with AI. But with LLM-generated text, at worst it's just wrong, and at best it's missing the things that make me interested in reading a link - a good writing style, a unique viewpoint, the voice of a person, genuine expertise. The Goblin.tools Magic To-Do, for instance - it's not worse than WikiHow, but it's also not better; for almost anything I can imagine using it for, I would really want to read a book or read an article or watch a YouTube video because a bare list of steps is not helpful without an experienced person showing you what to keep an eye out for.

Not a huge deal, I think, but my own personal preference. And I think there is a qualitative difference between using AI to generate images or computer code versus text - text is too likely to misinform. (Although once AI art gets better, and we can generate photorealistic images of politicians and celebrities riding lawnmowers on the moon that can't be distinguished from real photographs, we'll probably need to have that conversation about AI-generated art and misinformation.)
posted by Jeanne at 8:45 PM on May 12, 2023 [2 favorites]


If there isn't a hard and fast rule by which to decide who goes up in front of the firing squad and who doesn't, it's wrong to suggest that posters try to accommodate people's preferences. After all, someone could make an honest mistake and then have their entire life ruined.

If by ruined you mean ‘likely to become the target of a pointlessly acrimonious Metatalk thread’, then yes.
posted by Tell Me No Lies at 9:01 PM on May 12, 2023 [3 favorites]


Sigh I knew even as I wrote that that someone would show up to correct my oversimplification. Thanks.

Wasn’t really trying to do that, sorry, just expanding on the history and taking my own shot at a definition of the state of the art. I completely agree with your underlying point that it’s continuous with the history of the field.
posted by atoxyl at 9:02 PM on May 12, 2023 [2 favorites]


tigrrrlily: If there isn't a hard and fast rule by which to decide who goes up in front of the firing squad and who doesn't, it's wrong to suggest that posters try to accommodate people's preferences. After all, someone could make an honest mistake and then have their entire life ruined.

This must be hyperbole! I've seen that before. OK, cool.
But what are you meaning to say? I'm serious, I don't understand. Firing squad? What?
posted by Too-Ticky at 12:03 AM on May 13, 2023 [4 favorites]


I don't understand all the people explicitly against this idea. Personally, I don't really care about AI stuff being tagged or whatever, but part of not caring is not having strong feelings against it.

What is the harm in putting [AI] on a link to a chatbot? Why be against doing that? It does nothing for me either, but it seems like a courtesy to those who do care, that costs me nothing.

And the fact that edge cases exist where AI might be involved in a way that isn't obvious or clear isn't a convincing argument against using the tag in the situations that are clear and obvious.
posted by Dysk at 3:07 AM on May 13, 2023 [8 favorites]


I kvetched about this very issue recently on the Blue (and asked for an 'AI Art' tag to be applied to the post with a flag, though that doesn't seem to have happened) so I'm admittedly biased in my support.

For me it's the equivalent of people tagging NYT or Guardian links to let users know what they're about to click on. It's not a rule, it's a courtesy. Whether or not AI is ~all around us~ or whatever is beside the point. If someone wants to, for whatever reason, avoid giving the NYT any views via their daily Metafilter browsing, they are given the tools to do so. Some users here (myself included) feel similarly about giving their time and attention to AI-created work. Fairly simple in terms of an ask, really.
posted by fight or flight at 3:13 AM on May 13, 2023 [7 favorites]


The thread that prompted this feels like Mefi functioning well...comments in the post immediately noted this was AI. I have no problem if people want to label their posts as AI involved, and don't understand why it would be an issue to consider this as a request.

Everything here doesn't have to be a fight.
posted by tiny frying pan at 4:57 AM on May 13, 2023 [4 favorites]


OMG Wordshore, I'm dying that your Contact Me page is blank. I love you.

Maintaining a balance of being simultaneously misanthropic, and self-employed, is sometimes tricky.
posted by Wordshore at 5:02 AM on May 13, 2023 [8 favorites]


Is this because people have a visceral uncanny valley reaction to interacting with AIs/LLMs? I can understand that. When I once saw a delivery robot I felt extremely hostile and even murderous towards it. Same for the first time I saw a car in self-driving mode, and the first few times I saw the google maps vehicle taking pictures of my street.

However, I don’t agree with a tag rule. We should just exert influence through what we do best: complain incessantly to establish community norms voluntarily.
posted by haptic_avenger at 6:17 AM on May 13, 2023


However, I don’t agree with tag rule. We should just exert influence through what we do best: complain incessantly to establish community norms voluntarily.

...isn't that exactly what this thread is?
posted by Dysk at 6:24 AM on May 13, 2023 [7 favorites]


I will consider it as requested. cheers
posted by some loser at 7:11 AM on May 13, 2023 [1 favorite]


As an Early Agree-er here, I would offer a couple of examples:

1. A post linking to a short story or poem that was written by software and which is compared to one written by a human. For the purposes of discussion, it would be a tiresome "gotcha!" to not know which is which (eventually).

2. Factual advice that's written by software, as we have seen, may contain a lot of errors/hallucinations. Until that level gets much lower -- say, down to the small portion of malicious statements by humans plus honest errors -- I want to know about it before I read it, so I can be suitably skeptical.

I don't think anyone on MeFi is ignorant of how much of the web is generated by software-as-plumbing now, but it's the individual pieces that are generated in response to a specific question/prompt that I want to be alerted to.

Is that at all helpful?
posted by wenestvedt at 7:15 AM on May 13, 2023 [2 favorites]


I wish to disappoint my enemies, a few of which are here on MetaFilter, by stating that at the moment of typing this, I am very much alive.

So you say. Or so ChatGPT says. Dun Dun Dunn!
posted by biffa at 7:34 AM on May 13, 2023


Is that at all helpful?

In a theoretical sort of way but I can’t find any posts in the last month that fit either of those categories.

I’m not trying to start a debate about it, but it would be good to see what post broke the camel’s back as well as other examples. Reading through a month of posts I feel like I’m having trouble discerning the type of post people would prefer to be marked.
posted by Tell Me No Lies at 9:31 AM on May 13, 2023


I feel like this is a situation where it would be helpful to hear again from the OP. I sympathize with the upset, as it is unpleasant to bump into content online you really don’t want to. The discussion is traversing various possible pro- and anti- topics, and I can see merits on either side, but it would make a significant difference to me to understand the source of the request. If it’s something not yet surfaced here, that would be valuable to know.

These kinds of discussions always bring out the ol’ “it’s just a norm we’re discussing, guys, jeez, no one’s holding a gun to your head, just relaaaaax.” MeTa is the space for working this stuff out, but there are already countless numbers of requests and discussions on MeTa about potential tags and posting practices. Some gain no traction, some gain a lot. To me, whether to consider another such request really depends on the why.
posted by cupcakeninja at 9:32 AM on May 13, 2023 [3 favorites]


Since we seem to be on the precipice of the web being completely inundated with AI and AI-generated material, I think simply tagging stuff with AI seems like it will soon become insufficient.

I feel like the simpler and more practical approach would be to create alternate versions of each subsite that are explicitly for AI content (MetAIfilter, MetAItalk, AskMeFAI, etc.). All posts and comments on these new subsites could be AI generated. This would leave the regular subsites for non-AI content.

To further simplify and reduce unnecessary confusion, each human user could have an associated AI username: snofoAIm for me, jessAImyn, cupcakeninjAI, etc. These new usernames would be exclusively for AI content.

I’m sure there are some things that might cause temporary confusion (am I commenting in Metatalktail Hour or MetatalktAIl Hour?), but I’m sure we would quickly get used to things. It will certainly be much easier than adjusting step by step as AI becomes more prevalent.
posted by snofoam at 9:56 AM on May 13, 2023 [2 favorites]


In addition to the tag on the post, there's an ABOUT button on the goblin tools site that lets people know that it's AI powered. I think if a MeFite vehemently wants to avoid a certain type of content that isn't as cut-and-dried as the topics that regularly receive CW tags, they have to click the "more inside" on the post and do a little due diligence of their own before utilizing a linked site.
posted by kimberussell at 10:28 AM on May 13, 2023 [2 favorites]


I think simply tagging stuff with AI seems like it will soon become insufficient.

Ignoring the snark, I disagree. The difference to me is between being linked to something that's hosted on a site supported by AI (e.g. Google or whatever) and being linked to something explicitly made by AI, such as the post with the entirely AI-made trailer I linked to above, or an article entirely written by AI.

I'll refer again to my comparison to Mefites kindly noting when a link is hosted by the Guardian. Many Mefites have a moral objection to reading certain news websites, so prefer to be warned when a link goes there or when content comes from there. The same applies here. It's not an "I'm a Luddite and ignorant of the future" thing, it's a "I think AI is trash and don't want to waste my time looking at the content created using it" thing (and I think it's disingenuous to frame this as the former).
posted by fight or flight at 10:35 AM on May 13, 2023 [4 favorites]


the post with the entirely AI-made trailer I linked to above

This is like getting mad at Toy Story because it's made entirely by computers
posted by oulipian at 12:29 PM on May 13, 2023 [3 favorites]


Well, during the transition from hand-drawn to CGI, there were viewers who weren't aware of the distinction, and others who actively wanted to know.
posted by wenestvedt at 12:49 PM on May 13, 2023


This is like getting mad at Toy Story because it's made entirely by computers.

Not to derail, but Toy Story was entirely made by human artists and animators.
posted by fight or flight at 12:54 PM on May 13, 2023 [11 favorites]


I hope this fits within seriously and respectfully.

I’d be -1 on this. The proliferation of things people want content warnings on here and Mastodon / the fediverse is getting unwieldy. I’m down for CWs on things that are related to trauma, adult content, and a pretty wide swath of other topics / types of content. But.

This feels a bit like the "don't link to sites that use JS without warning (or, preferably, at all)" stance I've encountered in free / libre software circles. Like ChatGPT etc. or not, it's rapidly becoming mainstream, and asking for CWs for it feels extreme to me. I really hope this doesn't become a norm on Mefi. (But I would abide by it if it did, of course.)

I have very very mixed feelings about AI tools in general. Well, more like 75% negative and 25% positive, but it’s not going away. And, as others have said, it’s entirely possible not to even realize something has this under the hood.

Politely, please no.
posted by jzb at 1:40 PM on May 13, 2023 [30 favorites]


I’m not trying to start a debate about it, but it would be good to see what post broke the camel’s back as well as other examples.

My apologies, the back-breaker was presented in this post.
posted by Tell Me No Lies at 3:12 PM on May 13, 2023


Not to derail, but Toy Story was entirely made by human artists and animators.

Oh sorry, I wasn't aware that every frame of Toy Story was drawn by hand, using no computer-assisted tools whatsoever
posted by oulipian at 5:13 PM on May 13, 2023 [6 favorites]


The proliferation of things people want content warnings on here and Mastodon / the fediverse is getting unwieldy

I think this is the crux. Broadly informative description of posts should be uncontroversial as a virtue around here. But new entries on the explicit and implicit lists of “the wrong way to do things” are not something I feel that Metafilter needs more of, and… look, it’s hard to get around the fact that the initial request was not framed in terms of specific concerns but “I don’t like it and I don’t want to see it.” I don’t think that’s a great basis for rules. There have been some thoughtful comments about specific concerns about specific applications of AI in the thread, however, which I hope people will take into account in writing posts.
posted by atoxyl at 5:28 PM on May 13, 2023 [9 favorites]


It's not an "I'm a Luddite and ignorant of the future" thing, it's a "I think AI is trash and don't want to waste my time looking at the content created using it" thing (and I think it's disingenuous to frame this as the former).

Frankly I have more sympathy for arguments like “I want to avoid generative art posts because of the unsolved labor and attribution issues” - an honest Luddism, and I’m not using the term pejoratively - than “I want to avoid generative art posts because I presuppose they are all trash.” This is a site where people share things that they presumably believe not to be trash. It’s normal that not everyone will agree with that assessment, but we don’t need to write anybody’s sense of taste into the rules.

There’s an interesting uncanny valley sort of effect here, actually, in that people used to love art projects built on the limitations of generative neural networks - how many times has Janelle Shane been posted? - but now that they are sort of decent at imitating human work the reaction seems a lot more hostile. I have felt this myself - I really couldn’t give a shit about people showing off their photorealistic Midjourney movie stills or generic fantasy art. But come on - it is still ultimately a tool controlled by humans and I would really, really not want to be the guy betting that nobody is going to push its boundaries and make it do something exciting.
posted by atoxyl at 5:52 PM on May 13, 2023 [9 favorites]


I also realize that a lot of this sentiment stems from an oversaturation of hype and trash right now but hype is by definition something of a self-resolving issue in the long run.
posted by atoxyl at 5:58 PM on May 13, 2023 [1 favorite]


I'll raise my hand as not wanting to unknowingly click into ChatGPT content - either to waste my attention or to signal support for it with my clicks. I'm really uncomfortable with the way it generates misinformation. Calling it hallucinations doesn't make it either acceptable or charming. The terrible trend of bogus and useless Google results is bad enough. What did Neal Stephenson call it in Anathem? Bogons?

Posting something that's knowingly and willingly wrong or funny? Great! Mark it - whether with a tag or just in the wording of the post (below the fold is fine with me, speaking only for me). Passing something through that might be sincere/truthful/accurate/human, or is *pretending* to be sincere/etc. -- for my part, no thanks. I'm not against anyone *posting* that content; I just don't want to engage with it unwittingly.
posted by janell at 9:36 PM on May 13, 2023 [8 favorites]


Am for noting links to AI-driven tools in FPPs and comments, and avoiding AI-generated text in comments except if necessary for an example. And people trying that shit in Ask absolutely need to knock it on the head.
posted by Artw at 8:14 AM on May 14, 2023 [3 favorites]


I'm happy to see AI tags on posts/links etc that use it, and I think people should be intentional about why it's there. Personally, the idea that I might be responding to artificially-generated questions in Ask, for example, bugs me. It feels like disrespect for my time and energy.

That said, I do take the point that AI might soon become so ubiquitous that tags or notations could be redundant or impossible to maintain, but I don't think we're there yet. During this transitional period, it seems fair to let people know what they're engaging with.

I also agree, though, that labelling should be viewed as a courtesy to other users, rather than a hard requirement, which might be difficult to enforce anyway.
posted by rpfields at 10:19 AM on May 14, 2023 [2 favorites]


About the “using your words” concern, 3a of the OpenAI EULA seems to expressly say they *aren’t* doing that. Alexa of course makes no such provision, and Bard sure sounds like they’re using whatever you feed them to better train the model.

I’m honestly still annoyed that we’re now colloquially calling LLMs “AI” and would hate to see that tag pop up on more things that are not in fact AI, which is thus far all of them. But I suppose that ship has sailed?

At any rate, I’m struggling to see how linking to an LLM is fundamentally much different from linking to any other site that collects user data, many of which are almost certainly collecting far more PII than ChatGPT is. I thought the post in question was pretty clear about what it was, and am sort of surprised anyone squicked out by this whole thing would click the link.
posted by aspersioncast at 11:02 AM on May 14, 2023 [1 favorite]


If one is making an FPP and they know there is AI-created content involved then it would be considerate to mark it as such. That's just being kind. I do not understand people being against being kind and considerate.
posted by terrapin at 11:57 AM on May 14, 2023 [6 favorites]


The FPP that was brought up as being a problem was marked as such, and still generated this MetaTalk post, so I think there's understandable wariness about what's actually being asked.
posted by lapis at 12:14 PM on May 14, 2023 [4 favorites]


I thought the post in question was pretty clear about what it was, and am sort of surprised anyone squicked out by this whole thing would click the link.

The OP was pretty clear about the problem and the implied solution, though?

i would like to respectfully ask the community to please consider placing some textual warnings in FPP text [...] the post had one single MeFi tags for AI, which was only visible from inside the post

I believe the OP is suggesting doing something like link [AI content] or similar in FPPs where the poster is explicitly linking to AI-generated content. It doesn't need to be a rule, just like putting [SLYT] or [NYT] isn't a rule. Just an extra word or two typed out would be useful for some people in the community.
posted by fight or flight at 1:08 PM on May 14, 2023 [8 favorites]


And half this thread is people saying that tags seem reasonable, which was not the original request. And it seems like everyone is assuming whatever they believe to be reasonable to be the thing that everyone else is asking for or reacting against, but there's a fair amount of variation in the suggestions.
posted by lapis at 1:24 PM on May 14, 2023


I believe the OP is suggesting doing something like link [AI content] or similar in FPP where the poster is explicitly linking to AI-generated content.

That is not the request. It is that links that would have the user interacting directly with an AI be flagged.

As for me, put me in the "you already interact with thousands of AIs just by browsing" camp. Should any site with Google Analytics be flagged? A Facebook beacon? Any of the thousands of tracking ad companies? They're all using AI to analyze what you're doing, even if you don't do anything beyond clicking the link.

Even if the goblin.tools system was programmed with specific experiences by the developer rather than using a GPT, that would loosely fall under the category of an expert system, which is an ..... AI.
posted by Candleman at 4:23 PM on May 14, 2023 [3 favorites]


(Speaking for myself, when I mentioned tagging comments, I meant using the inline square brackets in the FPP text, a la [tag]. I do not mean the actual post tags, which, as someone who uses mobile classic, might as well not exist at all.)


As for me, put me in the "you already interact with thousands of AIs just by browsing" camp. Should any site with Google Analytics be flagged? A Facebook beacon? Any of the thousands of tracking ad companies? They're all using AI to analyze what you're doing, even if you don't do anything beyond clicking the link.

None of these constitute directly interacting with AI, a la ChatGPT. You can't avoid AI in the same way you can't avoid road transport - everything you buy was delivered on a truck. It doesn't follow from there that it's pointless nonsense to avoid riding in cars. Similarly, there is a meaningful difference between using AI tools directly, and visiting a website that might have Google Analytics.

Even if the goblin.tools system was programmed with specific experiences by the developer rather than using a GPT, that would loosely fall under the category of an expert system, which is an ..... AI.

By this logic, isn't every computer program AI? Like, I know we're expanding the definition a little if we include LLMs, but a tool written by a human programmer that does not use machine learning techniques, LLMs, or similar is generally not considered AI.
posted by Dysk at 5:57 PM on May 14, 2023 [3 favorites]


Nobody is using the marketing term “AI” for an expert system these days.
posted by Artw at 7:02 PM on May 14, 2023 [1 favorite]


As for me, put me in the "you already interact with thousands of AIs just by browsing" camp. Should any site with Google Analytics be flagged? A Facebook beacon? Any of the thousands of tracking ad companies? They're all using AI to analyze what you're doing, even if you don't do anything beyond clicking the link.


This argument has come up a few times in this thread and I can't tell if it's disingenuous or if commenters are genuinely conflating these things. Regardless: what I'm pretty sure the OP is referring to is AI-generated content.

(Also, there exist tools to block Google Analytics, FB beacons, and so on, and I think a lot of people here choose to use them and at least disable as much tracking as possible. Equivalents for content don't exist.)
posted by trig at 11:32 PM on May 14, 2023 [5 favorites]


There is a substantial difference between the background ML-based technologies that have slowly crept into the plumbing of the modern web over the last twenty years, and the LLMs of the past couple years. Goblin.tools is palpably making calls to OpenAI’s API or one of their competitors, and this is obvious within 30 seconds of visiting the site.
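
For anyone wondering what "making calls to OpenAI's API" actually looks like, here is a minimal sketch using the openai Python package as it stands today. The model, system prompt, and task are my own guesses for illustration; goblin.tools has not published its internals:

```python
import openai

openai.api_key = "sk-..."  # the site's secret server-side key, never shown to visitors

# Hypothetical goblin.tools-style request: break a visitor's task into steps.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model; the real one is not public
    messages=[
        {"role": "system", "content": "Break the user's task into small, concrete steps."},
        {"role": "user", "content": "clean the kitchen"},
    ],
)

print(response.choices[0].message.content)
```

Every call like this sends the visitor's text off to OpenAI's servers, which is precisely the kind of direct interaction the OP is asking to have flagged.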

To clarify what I wrote above: we can all tag sites that obviously use LLMs or generative art for maybe another two or three years before even that technology is so ubiquitous that avoiding it has become functionally impossible, and we will need to sunset explicit AI tagging at that time.

Giving people who have either uncanny valley reactions or ethical objections a few years in which to acclimatize at their own pace is worth the absolutely minimal effort it would take the rest of us. I am, as is obvious from my comment history in general but especially the last few months, a major enthusiast of this latest technology, and of everything leading up to it from the earliest neural networks of the late 90s. I am also someone with mild spectrum disorder who actively struggles with empathy and has to work - and work hard - at it. I feel like I would be the first person to reject this notion and I really don’t, because this is clearly upsetting to some people. So I don’t understand why other people would be against it.

This is Metafilter. We are not Republicans. It requires almost zero effort to help ease people in. Why on Earth would we not do so?
posted by Ryvar at 1:09 AM on May 15, 2023 [3 favorites]


Should any site with Google Analytics be flagged? A Facebook beacon? Any of the thousands of tracking ad companies? They're all using AI to analyze what you're doing, even if you don't do anything beyond clicking the link.

I think most of us have tools on our browsers to help us against ad tracking. I'm not aware of a block-LLM extension.

To everyone saying this is no different than the rest of the web, clearly the EU at least disagrees. And there's a reason Google Bard does not operate in the EU.
posted by vacapinta at 1:20 AM on May 15, 2023 [2 favorites]


LLaMA and its free community-driven alternatives are out there and accelerating to meet and potentially exceed what Google or OpenAI are doing over the next few months. There are replacement models that are fully legally sourced and unencumbered by any megacorp licensing strictures. The horse has fled the barn, it is freely available to anyone with a decent graphics card and neither the EU nor anybody else is going to be able to meaningfully regulate it.

The phase where you could sue maybe three or four companies and set usage limits ended this past March. We are already in the post-regulatory era. Another couple years until it’s everywhere, all the time, and the genie won’t be getting back in the bottle. And that does not change the fact that the right thing to do is make the transition easier for others with a quick bit of warning.
posted by Ryvar at 1:42 AM on May 15, 2023 [1 favorite]


To clarify what I wrote above: we can all tag sites that obviously use LLMs or generative art for maybe another two or three years before even that technology is so ubiquitous that avoiding it has become functionally impossible

I'm not sure that's at all true. Predictions are hard, especially about the future, but I wouldn't be so certain this latest silicon valley fad will have legs even close to the extent that the pushers are claiming.
posted by Dysk at 2:20 AM on May 15, 2023 [4 favorites]


I feel like I would be the first person to reject this notion and I really don’t, because this is clearly upsetting to some people. So I don’t understand why other people would be against it.

Because there are lots of different things that are upsetting to lots of different people and it would become (even more) ridiculous around here if we kept adding to the list of random guidelines for posting here. This is not a friend group where of course we cater to individuals based on quirks; it's a public website that presumably wants new users to be able to understand it.

Because preference is not the same as oppression. A minority of users objecting to something because it reinforces societal oppression, or asking others to change their behavior, is one thing; "I just don't like it" is not the same thing and it's getting ridiculous how much people are treating it as if it is.

Because having clear and understandable guidelines, rather than a collection of hidden quirks, idiosyncratic enforcement, and an overwhelming list of rules, is part of what creates and supports equitable structures on a site like this, so that everyone can participate with equal understanding of the rules and those rules can be moderated consistently and fairly. Continually adding guidelines or expectations based on preference (as opposed to actual group harm) goes against that.

I get that people have personal triggers or dislikes, but part of interacting with other people in public or semi-public spaces is needing to learn how to navigate that, and not just ban or heavily police things we don't like. My dad recently died and conversations and articles about fathers are difficult for me right now, but it would be ridiculous for me to say that all mentions of fathers on this site need a trigger warning. I can ask close friends to be gentle with the topic for a bit, but it's unrealistic to extend that to the public at large. That's something I need to navigate for myself and be willing to step out of conversations when I need to, not blame the people having the conversation.
posted by lapis at 7:27 AM on May 15, 2023 [54 favorites]


Actually adults navigate participating in a society and having societal conventions all the time.
posted by Artw at 7:43 AM on May 15, 2023 [5 favorites]


those rules can be moderated consistently and fairly. ... not just ban or heavily police things we don't like

The OP did not ask for that. No one in this thread has asked for that. Bans and policing or any form of enforcement or moderator involvement have only been brought up by people who don't want to do this thing in FPPs, which is the only thing that the OP asked for. Not every mention of AI, not every link in any comment, just "Hey, if you're going to make an FPP whose entire point is Here's a thing where you interact directly with an AI, please tell us that first, especially if the thing doesn't tell you that either."
posted by Etrigan at 7:45 AM on May 15, 2023 [6 favorites]


I'm just going to say again that:

I do want people to feel good reading here, and am glad to be aware of issues

and at the same time:

About 97% of the time I think about posting something, I give up because of the cognitive load of carrying all these asks.

The reason I'm saying this again is I hear a lot of "this is a small ask" and yes, yes it is, but I feel like there are like 100 small asks that I have to think through. So they are no longer small at all.

And as much as I don't love 'em, on Twitter or Facebook I can just drop a link and hit send, man.
posted by warriorqueen at 8:09 AM on May 15, 2023 [46 favorites]


lapis has it. I cannot count the FPPs that I have considered and not made because of possible issues like this. The labor of posting to MetaFilter, if you want to be a conscientious community member, is far from trivial.

The more I read MeTa, the more I realize that the blend of posting rules, norms, and “where appropriate” actions outweighs the effort of posting on my workplace's internal blog, where all of that applies, plus a host of laws and career factors. Obviously the front page keeps going here, and many users do not (I assume) much read MeTa, and thus perhaps don't consider the various requests made here, but I started reading MeTa more regularly last year in order to be a more engaged site member. That may have been a mistake.

On preview: also, “what warriorqueen said.”

(And my condolences to you, lapis.)
posted by cupcakeninja at 8:24 AM on May 15, 2023 [15 favorites]


if you want to be a conscientious community member

Yeah, this. This post (and its tags) asks for a trigger warning, which is a much higher level of responsibility than marking content or tagging. Several people have said "Oh, just don't do it if you don't like it," which, as far as I am concerned, is not an option for a trigger warning.
posted by Tell Me No Lies at 9:07 AM on May 15, 2023 [1 favorite]


This argument has come up a few times in this thread and I can't tell if it's disingenuous or if commenters are genuinely conflating these things. Regardless: what I'm pretty sure the OP is referring to is AI-generated content.

There's no conflation. The internet is full of AI and that's going to continue full bore. For example, real estate agents were very quick to adopt LLMs to spit out property descriptions. I have no idea what on Zillow is human- or LLM-written, and no idea whether to tag it or not.

The main complaint about LLMs is that they are replacing writers. You have probably read lightly edited LLM text today without knowing it, and if not, you certainly will be encountering it daily starting sometime in the next six months.
posted by Tell Me No Lies at 9:13 AM on May 15, 2023 [2 favorites]


Yes, and that is a problem, but one on a broader societal level that MeFi is not particularly equipped to deal with, unlike “please stop posting chatGPT shit unannounced”.
posted by Artw at 9:27 AM on May 15, 2023 [4 favorites]


If nothing else, if this means that Metafilter decides (as a community or as an official body) that there will be FPPs being posted which contain links to or are even written by AI without tags or oversight, I know I personally would rather have an official heads-up so I can officially cut ties with the site.

It's bad enough to be constantly doing the "can I trust whether this was written by a human or not" dance everywhere else, I'd rather not waste time on the Blue if we're totally giving up due diligence and deciding it's ok to just shrug and let it all happen because, who cares, it's going that way anyway, right?

Like, I don't think there's enough thought going into the implications, in terms of the integrity of the site as a whole, of allowing Metafilter to be overrun with AI-made content without tags or content notes or anything. This really feels like wandering blindly into an uncomfortable place, and I'm pretty sure many users and donors to the site would rather know if this has the potential to become official-unofficial policy (if that's what's being discussed ITT).
posted by fight or flight at 9:36 AM on May 15, 2023


Posting chatbot-generated content would probably just come under existing spamming rules.
posted by Artw at 9:44 AM on May 15, 2023 [3 favorites]


And just general "is this material good enough for a post?" considerations.
posted by lapis at 9:54 AM on May 15, 2023 [3 favorites]


if this means that Metafilter decides (as a community or as an official body) that there will be FPPs being posted which contain links to or are even written by AI without tags or oversight

if we're totally giving up due diligence


I don't know if you recognize how entitled and condescending you're coming off with these demands?

First, there are perhaps 100 users posting in this thread, which is a fraction of the active userbase, which is a smaller subset of the entire userbase. Even if everyone here were in complete agreement, we still wouldn't have decided "as a community" on anything. Certainly there's been no Jessamyn response yet. I also don't know what kind of "oversight" you're suggesting, since the only thing requested is that posters identify links to AIs/LLMs, not refrain from posting them.

As for due diligence? You're not "due" anything in particular. No one here owes a duty to you when posting, outside of what's already in the Guidelines. Do you see the notes above, where long-term site users are saying how the mounting demands of "how an FPP should look" are deterring them from posting? Can you consider how quickly new users might immediately bounce off this site, reading language like yours?
posted by The Pluto Gangsta at 10:04 AM on May 15, 2023 [11 favorites]


if this means that Metafilter decides (as a community or as an official body) that there will be FPPs being posted which contain links to or are even written by AI without tags or oversight

if we're totally giving up due diligence

I don't know if you recognize how entitled and condescending you're coming off with these demands?


These are not in any way demands, nor is the rest of the comment you excised these from. They're saying "this thing affects my comfort level with participating in this site in some way," which is exactly what you're simultaneously saying is the problem with people asking meekly whether MeFites might consider attaching a couple of words to some small sliver of posts.
posted by Etrigan at 10:10 AM on May 15, 2023


About 97% of the time I think about posting something, I give up because of the cognitive load of carrying all these asks.

The reason I'm saying this again is I hear a lot of "this is a small ask" and yes, yes it is, but I feel like there are like 100 small asks that I have to think through. So they are no longer small at all.


I wonder how much of this load can be lifted with tech improvements (and, given how slowly those come these days, if it's something that could be opened up for volunteer effort). Adding archive links to posts (and comments) could be done automatically, annotating domains could be done automatically, and there could be some quick check marks for various content warnings.
posted by trig at 10:15 AM on May 15, 2023 [1 favorite]
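
To make trig's suggestion concrete, here is a minimal sketch (in Python, purely illustrative; nothing like this exists in the site's codebase) of the archive-link half of that automation: every link in a draft post gets a Wayback Machine companion link appended, assuming the https://web.archive.org/web/<url> form, which redirects to the latest snapshot.

    import re

    ARCHIVE_PREFIX = "https://web.archive.org/web/"

    def add_archive_links(post_html: str) -> str:
        """Append an '[archive]' link after each <a href="..."> in a draft post."""
        def annotate(match: re.Match) -> str:
            url = match.group(1)
            if url.startswith(ARCHIVE_PREFIX):
                return match.group(0)  # already an archive link; leave it alone
            return match.group(0) + f' [<a href="{ARCHIVE_PREFIX}{url}">archive</a>]'
        # Match a whole <a ...>...</a> element, capturing its href value.
        return re.sub(r'<a\s[^>]*href="([^"]+)"[^>]*>.*?</a>', annotate, post_html)

    print(add_archive_links('<a href="https://example.com/story">story</a>'))
    # -> <a href="https://example.com/story">story</a> [<a href="https://web.archive.org/web/https://example.com/story">archive</a>]

The domain annotations trig mentions could hang off the same hook; the point is only that this is the sort of chore software, rather than individual posters, could carry.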


if this means that Metafilter decides (as a community or as an official body) that there will be FPPs being posted which contain links to or are even written by AI without tags or oversight

I feel like the conversation is kind of jumping around unproductively, and this is just the latest example.

I think it's not wildly controversial to say that it's possible to have a wide array of opinions on the following, and that they shouldn't be conflated:
1) Whether links to sites where you interact with an LLM should be flagged in post text (which is what I understand the original post to be asking for),
2) Whether we should extend that flagging to either a) A.I. *generated* content, like images made in Midjourney or b) sites that use some sort of machine learning,
3) Whether we should allow AI-written content as posts or comments on Metafilter, which seems to be pretty against the rules already and beyond the scope of this thread.

But, returning to the quote, I don't see how saying "user's discretion" to 1) means we can't draw a hard line on 3).
posted by sagc at 10:26 AM on May 15, 2023 [5 favorites]


Good point, sagc. I will say my comment was largely reacting to the attitude of some posters here that "AI is already everywhere so what's the point in resisting it". That attitude is what worries me more than the pushback on tagging AI-generated content. Though I accept that might not have come across in my comment. I'll probably step back from this thread after this comment, though I will be keeping an eye out for any official moderators who (hopefully) might weigh in.

(As an aside, @The Pluto Gangsta, as a long-term user who has been on this site for 13 years, I would hope to deserve as much respect with regard to my comfort level as any other user, including new ones. I will say the way this thread has gone has been one of the most off-putting experiences in a while, so maybe I just need to accept that Metafilter-as-it-is-now is no longer a place I want to spend time in. Oh well.)
posted by fight or flight at 10:39 AM on May 15, 2023


As a group, we sort of mostly know to roll our eyes at stuff like web3-hype financial grifters and fairly unambiguously toxic companies like Microsoft or ADM or Equifax, and it's probably not unreasonable to just bring up "oh, hey, OpenAI is among these shitty exploitative companies that still has some decent PR." Some folks are going to find the ethics around LLMs in general, the LLMs in broad use, and OpenAI as an entity in particular kind of icky to engage with, especially at this point in the hype cycle.

Do we want to normalize tagging every potential direct interaction with OpenAI APIs? They're getting ubiquitous because of how trivial and cheap it is to write software with them at this point in the market-capture cycle. It'll be hard to catch 'em all, and we might want to be forgiving if we don't. It'd be nice to keep in mind that they're kinda gross, though.
posted by majick at 10:55 AM on May 15, 2023


OP’s profile also states “i don't see titles on posts”, so it doesn’t seem like they’re actually taking that much care in screening what they’re interacting with in general.
posted by not just everyday big moggies at 11:26 AM on May 15, 2023 [1 favorite]


Ehh… that's MeFi's weird history setting them up for a fall there. IIRC, when showing titles on the homepage was made an option, it was absolutely the understanding that the body text would remain a complete description of the post contents.
posted by Artw at 11:33 AM on May 15, 2023 [1 favorite]


Based on the original poster's current user profile and their description of having lost something they can never get back, the initial objection seems almost religious or ritual-impurity-based in nature, and to be honest I have a hard time wanting to use that as a basis for site policy.

The OP did not ask for a change to site policy.
posted by Etrigan at 11:44 AM on May 15, 2023 [3 favorites]


i am fairly upset and i hope to never experience this interaction again, which is why i ask the community to please consider this request.

I don’t see anything but policy addressing OP’s desire to literally never interact with AI again. “The community” has considered the request, so I guess we’ve done what was asked and can close up this thread?
posted by not just everyday big moggies at 11:52 AM on May 15, 2023 [3 favorites]


This argument has come up a few times in this thread and I can't tell if it's disingenuous or if commenters are genuinely conflating these things. Regardless: what I'm pretty sure the OP is referring to is AI-generated content.
i had clicked on the goblin.tools link and unknown to me i interacted with chatGPT, which is something I never, ever, ever, ever wanted to do. i am fairly upset and i hope to never experience this interaction again
Who is being disingenuous here?

By this logic, isn't every computer program AI?

No. Expert systems are specifically designed to model human responses to a situation, which is basically what goblin.tools does.

None of these constitute directly interacting with AI, à la ChatGPT. You can't avoid AI in the same way you can't avoid road transport - everything you buy was delivered on a truck. It doesn't follow from there that it's pointless nonsense to avoid riding in cars. Similarly, there is a meaningful difference between using AI tools directly, and visiting a website that might have Google Analytics.

If you have done a Google search in the past few years, you've directly interacted with an AI. And Bing searches. And DuckDuckGo. And Facebook, Instagram, WhatsApp, etc. Maybe not Signal.

Nobody is using the marketing term “AI” for an expert system these days.

The difference between an LLM and an expert system is a smaller and better-tuned set of data on the expert system's side of things. I don't know of any scholar of AI who has declared that expert systems are no longer AI, but if you know of any, please inform me.

Because there are lots of different things that are upsetting to lots of different people and it would become (even more) ridiculous around here if we kept adding to the list of random guidelines for posting here.

Agreed.

I am one of the few proponents of intellectual property here, but I've never suggested that anyone who posts "pirated" content (or implications thereof) should add a trigger warning.

If nothing else, if this means that Metafilter decides (as a community or as an official body) that there will be FPPs being posted which contain links to or are even written by AI without tags or oversight, I know I personally would rather have an official heads-up so I can officially cut ties with the site.

Yet again, that is not what the ask is. The overwhelming consensus over multiple MetaTalks is that content generated entirely by AI should not be allowed unless it has an extremely novel result. In this specific case, there is no way to create an application that could do what it purports to do without some kind of AI, so the question is whether the tagging was sufficient.

lapis has it. I cannot count the FPPs that I have considered and not made because of possible issues like this.

Again, agreed. Expecting posters to be aware of hundreds of potential non-obvious triggers just means that people are less likely to post here, which leads to the overall decline of Metafilter as a platform.
posted by Candleman at 12:14 PM on May 15, 2023 [2 favorites]


The OP did not ask for a change to site policy

That's fair, but they are advocating at the very least for a change in community etiquette, and I'd still find that request difficult, based on the reasons I elaborated on above.
posted by Ferreous at 12:21 PM on May 15, 2023 [2 favorites]


The difference between an LLM and an expert system is a smaller and better-tuned set of data on the expert system's side of things. I don't know of any scholar of AI who has declared that expert systems are no longer AI, but if you know of any, please inform me.

Scholars of AI are irrelevant to hype cycles. The present hype cycle does not apply to anything that doesn’t roam the web ripping everyone off for a massive dataset, so your expert systems can safely sit this one out.
posted by Artw at 12:24 PM on May 15, 2023 [3 favorites]


Indeed, it does not feel right.

It also feels typical of the snide mockery that is directed at anyone who dares to suggest that certain Mefites' latest favorite toy is actually a pile of pernicious, actively harmful crap that the rest of us do not wish to be exposed to. "Nobody's forcing YOU to engage in these threads" they sneer, as they proceed to drown the entire front page and every single thread in GPTcruft.

It's pretty clear that asking nicely for people to be considerate is a waste of time, and will only result in the asker being made a target of these kinds of personal attacks.
posted by Not A Thing at 1:37 PM on May 15, 2023 [5 favorites]


Eh, if you think a reference to a fictional jihad against all thinking machines is relevant enough to your presence here to make it the central part of your profile, I think it's fine for people to point that out as part of trying to understand some of the reasoning behind your request to never, ever interact with LLMs.
posted by sagc at 1:41 PM on May 15, 2023 [3 favorites]


"Nobody's forcing YOU to engage in these threads" they sneer, as they proceed to drown the entire front page and every single thread in GPTcruft.

Huh? Who is “they”, and when did this happen?
posted by not just everyday big moggies at 1:42 PM on May 15, 2023 [7 favorites]


Mod note: A couple of comments about the OP, unrelated to the post, deleted.
posted by loup (staff) at 2:01 PM on May 15, 2023 [2 favorites]


If you have done a Google search in the past few years, you've directly interacted with an AI. And Bing searches. And DuckDuckGo. And Facebook, Instagram, WhatsApp, etc. Maybe not Signal.

If you cannot understand how that is meaningfully different to reading a bunch of chatGPT output, I just don't know what to say to you. Optimised search is meaningfully different to text generation. The fact that we use 'AI' to refer to more than just LLMs doesn't mean that it doesn't make sense to feel differently about LLMs than about the other things we use the same label for.
posted by Dysk at 3:40 PM on May 15, 2023 [4 favorites]


this post tricked me into using AI and i find that repugnant since i have heretofore avoided it vigorously and vehemently. i feel kind of sick. i can never get this back.

This here is OP’s comment in the post that prompted this meta. Can anyone tell me what it is that OP “can never get back”? Like, could someone unpack the grievous irreparable harm that was caused to them by goblin.tools? I can understand not wanting to interact with LLMs, but frankly the irreparable-loss claims seem histrionic unless I’m missing something major about the situation.
posted by not just everyday big moggies at 3:59 PM on May 15, 2023 [13 favorites]


Everyone being weird about the OP is grossing me out a little.
posted by Artw at 4:10 PM on May 15, 2023 [12 favorites]


To move this away from the OP, perhaps this framing would be helpful:

What part of the process of making an FPP could reasonably include consideration of the question at hand? Is this a pre-writing checklist, a keep-in-mind-while-writing checklist, or something to look at as a last step? Or, sort of a conceptual thing to bear in mind generally?

I would be particularly interested to know what Not A Thing, Etrigan, or fight or flight think about this. I am trying to square the basic sentiment y’all have expressed—broadly speaking, that we should be kind to each other—with what seems to me like a fairly long list of asks that have surfaced on the site over the years.

trig’s idea of automatic annotations aside, I don’t know how I would go about incorporating this into my posting (such as it is), or how to rank this request vs. not linking to the Guardian or Twitter or whatever. It sounds to me, and please tell me if I’m reading the room wrong here, that I/we are being told that we should be taking seriously most any request about posting from someone who is very upset. (I’m not trying to be hyperbolic or disingenuous.)
posted by cupcakeninja at 4:38 PM on May 15, 2023 [3 favorites]


“sort of a conceptual thing”
posted by cupcakeninja at 4:44 PM on May 15, 2023


It's creepy. I did the same thing as OP. I went to that site and plugged in "clean the bedroom," was issued a generic list of steps, clicked back to the thread, read that you have to click the "more peppers" thing on each step, clicked more peppers on "make the bed" and was told to "take the sheets off the bed and fold them neatly and put them away," whereupon the penny finally dropped and it occurred to me that I was mindmelding with my nightmare and I skittered away. Although, okay, I didn't right away. I admit I first asked it how to make croissants and was told I had to make a butter block and manipulate it and the insentient slimemold from the uncanny valley mixed up the rough puff method and the stupidly impossible butter slab method and also for several steps just forgot all about the dough: I was just going to endlessly fold and roll and refrigerate and fold and roll and refrigerate a naked slab of butter.

It's creepy creepy creepy. I don't understand why so many people are like "what? calm down. it's Bing." It's not, for fuck's sake, Bing; it's Grammarly on space crack. I hate it a very big lot, and I can't understand why it wouldn't be totally reasonable to be unnerved to scared to absolutely terrified of it. Anyway, it's brand damn new. A brand new thing. Why isn't it reasonable to want to talk about it and how best to use it here?

If your phobia is fear of bicycles or butterflies, you probably aren't going to say you want a trigger warning on it, but if it's fear of interacting with weird undead creatures that shambled out of the mutated collective consciousness and are rapidly eating the culture, well, then I think enough people share that feeling that you wouldn't be unreasonable to at least mention that it bugs you, and if anybody offering up chances to interact with the borg has an extra couple of seconds to say "this is one of those fucked-up AI thingies," that would be great.

Can anyone tell me what it is that OP “can never get back”?
I can't, not being OP, but I can tell you how it felt for me when I realized what was up. I felt creeeeeeeeeped out in a profound way. Everything I typed has allowed it to ...guh! gak! "learn." So I've contributed, now, to the Ice-9 sludge inexorably sliming its way over and subsuming the culture. barf x a million.
posted by Don Pepino at 4:54 PM on May 15, 2023 [6 favorites]


I would be particularly interested to know what Not A Thing, Etrigan, or fight or flight think about this. I am trying to square the basic sentiment y’all have expressed—broadly speaking, that we should be kind to each other—with what seems to me like a fairly long list of asks that have surfaced on the site over the years.

Not. Saying. “No.”

That’s it. That’s what I’d like to see more of on these MeTas. Just read what a fellow MeFite is asking for, think about it, process it, consider it, and then go about your day, even if you disagree, because your lack of speaking against it isn’t going to be the one thing that causes that request to be carved onto the stone tablets that each moderator has installed over their monitor in the BAN ANYONE WHO DOES THIS section. If you don’t want to do the thing that they’re asking for, then you simply don’t. You move forward. Maybe someday you make an FPP that points to a ChatGPT-powered website and you forget this ever happened and you don’t put “[AI]” next to it. And if glonous keming stomps into that FPP and comments “GODDAMMIT I ASKED YOU TO LABEL THESE BACK IN MAY OF 2023 DON’T YOU REMEMBER THAT?!?!?”, then you shrug and go about your day again.

People don’t remember That Time You Accidentally Offended Someone. People don’t even remember That Time You Did That Thing That Clearly Violated The Established Norms Of MetaFilter. What people end up remembering is That Time You Did That Thing And Got Called Out And Then Dug In And Made It A Whole Huge War.
posted by Etrigan at 4:59 PM on May 15, 2023 [4 favorites]


I’m wondering if the issue is repeat posts (he said, repeatedly posting), leading to a feeling of a pile-on. I don’t like piling on, or being piled on, so I’ll have to think about situations where it’s worth replying to requests on MeTa, vs. pondering them and moving on.
posted by cupcakeninja at 5:18 PM on May 15, 2023


Not. Saying. “No.”

Lol so the one thing you want is to not have anyone openly disagree with you?


I’m not the one who posted this. And it’s not disagreeing, it’s refusing. Recall that the OP asks us to “consider placing some textual warnings”. Not to never do it. To consider it, in the future, when posting things that lead directly to interacting with machine-learning systems. And people have not only refused to agree that they would consider doing that thing at any point in the future if that situation ever comes up, but they have taken the time to say, loudly and repeatedly, that they are mad that anyone would dare to ask them to consider doing that thing at any point in the future if that situation ever comes up.

Why? Why is it so important that the OP and everyone else knows that you refuse to do it? Why do you find it necessary to make sure I know that you’re laughing at that? Why can’t you and others just shrug and go about your day, secure in the knowledge that the odds of you ever suffering the slightest sanction, real or imagined, for refusing to do this thing that someone asked are so low as to be zero?
posted by Etrigan at 5:53 PM on May 15, 2023 [2 favorites]


I've appreciated the habit of some members hiding LLM-generated text behind the spoiler tag when quoting it in comments. I hope that continues.
posted by figurant at 6:08 PM on May 15, 2023 [3 favorites]


Everyone being weird about the OP is grossing me out a little.

Definitely +1 on this.
posted by GenjiandProust at 6:09 PM on May 15, 2023 [2 favorites]


Scholars of AI are irrelevant to hype cycles. The present hype cycle does not apply to anything that doesn’t roam the web ripping everyone off for a massive dataset, so your expert systems can safely sit this one out.

Your feelings do not negate serious scholarship any more than some random person's feeeeelings about covid vaccines do, so why don't you sit this out yourself? OP failed to specify why they don't want to interact with a GPT, so your feeeeelings about datasets are just your personal biases. You are some guy on the internet; your thoughts on the ethics of AI are not as important as those of people who have done serious scholarly work on the subject, just like Joe MAGA's thoughts on vaccines aren't as valid as medical scholars'. Get over yourself. I asked you to cite an actually informed figure in the field and you failed to do so.

If you cannot understand how that is meaningfully different to reading a bunch of chatGPT output, I just don't know what to say to you. Optimised search is meaningfully different to text generation.

Optimized search is literally using AI to try to get you to click on links in the results via text generation/selection. And yet again, the ask here is not about text generation but about interacting with an AI. I really don't know how to get this across to you beyond telling you to go re-read the ask until you understand what the words in it mean. The word "interaction" (or a variant of it) appears four times, making it the most commonly used word apart from function words (I, a, the, etc.).

I've appreciated the habit of some members hiding LLM-generated text behind the spoiler tag when quoting it in comments. I hope that continues.

That is still not what OP is asking for. What part of interaction is not clear?
posted by Candleman at 7:33 PM on May 15, 2023


That is still not what OP is asking for. What part of interaction is not clear?

No, but it would be a polite extension of something I've already seen on the site. Sorry to get you steamed.
posted by figurant at 7:38 PM on May 15, 2023


Sorry to get you steamed.

People have repeatedly misrepresented what OP clearly asked for here; adding to the confusion doesn't help.
posted by Candleman at 7:40 PM on May 15, 2023


And yet again, the ask here is not about text generation but about interacting with an AI

Read "interacting with" in the sense of 'talking to' but via text, and you'll see how I don't think that's a contradiction.
posted by Dysk at 7:52 PM on May 15, 2023


Given that I said that if you've done any kind of search on almost any major platform you've interacted with an AI, and you gave me a snotty response, I really don't know what your point is. How do you think Google guesses what you're trying to ask when it gives you (the often incorrect) answer blurbs?
posted by Candleman at 8:11 PM on May 15, 2023 [1 favorite]


I'm not trying to be snotty, so apologies if that is how it came across.

It's a different sense of "interacting with" in question. Again, if you don't see the functional difference between reading AI-generated text ("interacting with" in the sense of a conversation) and getting search results from an algorithm that has some machine learning behind it ("interacting with" in a much broader sense), then I don't know what to tell you. If you do understand the difference, I don't understand why you're insisting that the second sense is the only interpretation that's valid or makes sense.
posted by Dysk at 8:30 PM on May 15, 2023


Dysk, do you see a difference between prompting an AI to write some text based on your prompt, and reading something created by ChatGPT? Because I feel like the original post is, again, much more concerned with interacting with an LLM, as opposed to consuming content created by an LLM.
posted by sagc at 8:36 PM on May 15, 2023


Why? Why is it so important that the OP and everyone else knows that you refuse to do it? Why do you find it necessary to make sure I know that you’re laughing at that? Why can’t you and others just shrug and go about your day, secure in the knowledge that the odds of you ever suffering the slightest sanction, real or imagined, for refusing to do this thing that someone asked are so low as to be zero?

I shouldn't bother, but what the heck, I'll bite. Here's why it's important to me: in any sort of discourse, there's a balance between how much of a listener's responses, reactions, and feelings can be laid at the feet of the speaker, as being their responsibility, and how much of the listener's responses, reactions and feelings should be understood as being the listener's own responsibility. I'm not gonna claim the ideal split is exactly 50/50 - I think you have to allow some variance for the specific content, whether somebody's deliberately trying be inflammatory or get an emotional response vs. just being informative, and so on - but I will say that I don't think that either absolute extreme is healthy for discourse.

It's been my observation for - oh, at least 10 years now - that in online leftist/progressive circles generally, and here on Metafilter more specifically, the overall trend has been to make the speaker responsible for more and more and more aspects of how the audience reacts to their speech, and to hold listeners increasingly blameless for any reaction they might have no matter how extreme.

I think that trend originally was born out of a noble, progressive impulse, to try and balance the scales between speakers (often white, male, cis and rich) that had lots of power (which is why, in many cases, they got the opportunity to speak in the first place) and listeners (often none of those things) that had little or no power. Nevertheless: neither extreme, as I said, is healthy. Too little responsibility being put on the speaker creates a situation that lazy uninspired comedians have argued for since forever, where they can say any awful shit they want and then blame the audience for their inability to 'take a joke' - but too little responsibility on the listener creates real problems as well. It cripples the ability to have conversations; in a situation like that many people (frequently the most empathetic, most considerate people) who would otherwise have something to say will default to silence, rather than risk upsetting someone else and being blamed for it. Posters who are brave enough to post anyways will front-load posts with disclaimers and warnings and often feel like they're walking on eggshells even so. That heightened sense that you'll be blamed for any misstep means people go into conversations feeling defensive already. There are people in this very thread pointing out that the long list of informal demands on posters already makes them too uncomfortable to post. Surely their feelings - whether you see them as rational or not - are just as valid as the OP's feelings - rational or not - aren't they? But you seem to have pre-decided that a listener who says "hearing about X makes me uncomfortable, can we not do that" is inherently more valid than a speaker who says "being expected to jump through hoop Y makes me uncomfortable, can we not do that", because you've already accepted that a speaker carries the responsibility for both their own and their listeners' feelings.

Me? I think that attitude is killing discussion. I think there's a straightforward correlation between the growth of that perspective on this website, and the decline in posting and commenting on this website. I think if you want to have lively discussions, the balance needs to shift back the other way a few steps; people need to take some responsibility for how the thing they read/saw/interacted with made them feel, and not try to make their bad reaction into the poster's or community's problem. I think that if we must make a choice between privileging the feelings of people who think adding one more thing for posters to consider is perfectly good and reasonable, or privileging the feelings of people who think the number of things they have to consider before posting already makes it simply too daunting to post, then while we'll inevitably be unfair to someone's feelings, one choice we can make leads to more posts being made on this website, and one choice leads to fewer posts being made on this website. And so this? This, to me, is yet another attempt to push things in exactly the wrong direction to keep this website alive. So this time, since I'm here right now, but frankly not here often enough anymore to still care what people here think about me, I'm gonna push back. That's why.
posted by mstokes650 at 10:43 PM on May 15, 2023 [131 favorites]


do you see a difference between prompting an AI to write some text based on your prompt, and reading something created by ChatGPT?

Yeah, a small one - in trying to drive home the point that "interacting with" can easily be understood to mean "talking to", my language may have become unclear. It's reading ChatGPT-generated text in response that the OP raises as an issue. My point was more that this is a meaningfully different class of interaction to e.g. getting a page of Google search results, and people can perfectly reasonably feel differently about it.

Personally, I would rather all ChatGPT output be labelled (not because it's creepy, but because I don't like wasting my time on drivel), but I can understand finding the conversational aspect much creepier, even if I don't myself.
posted by Dysk at 1:01 AM on May 16, 2023 [4 favorites]


trig’s idea of automatic annotations aside, I don’t know how I would go about incorporating this into my posting (such as it is), or how to rank this request vs. not linking to the Guardian or Twitter or whatever.

In some ways this thread is reminding me of the old debates about mystery meat posts versus more descriptive ones. I read the OP request as basically the same as other requests to let readers know what a link is, as in "goblin.tools is a collection of small, simple, single-task Chat-GPT tools, mostly designed to help neurodivergent people with tasks they find overwhelming or difficult" (where all of the non-emphasized part is the original description).

In more complex posts with lots of links this can sometimes get unwieldy, and there genuinely is a cognitive load in (a) thinking about which things people would appreciate knowing and (b) thinking about how to word / format the descriptions, which would be really nice to just automate away. I also don't see any reason these descriptions have to be above the fold; I think it's okay to expect readers who don't like mystery meat to click into the thread itself. But describing the salient features of a link is a reasonable thing to do, and I read this thread as saying that the OP feels content automatically generated by AI is a salient feature. (I currently agree, regardless of what things might look like a few years from now.)
posted by trig at 2:10 AM on May 16, 2023 [4 favorites]


This is a disturbingly outsized reaction to a request to consider something, and the convo is more damaging than the request itself.

I would like Metafilter management to discuss not letting the community weigh in on MeTa in threads like these.
posted by tiny frying pan at 5:17 AM on May 16, 2023 [7 favorites]


If I were trying to avoid interacting with an AI on the web, I would research every site I am on before interacting with it. To me, this seems like the only practical way of interacting with online AI as little as possible.

To me, a trigger warning is used to give people a heads-up about something that will be on the page in front of them when they click through. If I were linking to the front page of Wikipedia, I wouldn't put trigger warnings for things just because you could search for them. But if the front page featured something trigger-warranting, I would put a warning.

On a site with thousands of users, there are surely thousands of things that people would prefer not to see. I am vexed when I click through to an article on a small-town newspaper that is completely subscriber-only. I agree with many of the sentiments above that posts are already subject to a lot of scrutiny, officially and just from the community. It can already be hard to keep track of the various elements of good posting etiquette, some of which are non-obvious. I think we should absolutely protect against the most harmful things and avoid putting people in a position of being exposed to things unexpectedly. The OP in this thread clicked through a link to a site where they still had the opportunity to avoid the thing they wanted to avoid. I think posting norms should be focused on things that are not easily controlled by the post reader.
posted by snofoam at 5:26 AM on May 16, 2023 [7 favorites]


This is a disturbingly outsized reaction to a request to consider something, and the convo is more damaging than the request itself.

It might be more useful to point out specific responses that seem "disturbingly outsized." When someone asks something on a place devoted to discussion, I would expect people to discuss it. I guess it is true that there's no space for people to make a suggestion/request/complaint and have no follow-up discussion. But would such a space even be useful?

I would like Metafilter management to discuss not letting the community weigh in on MeTa in threads like these.

I know that the future structure of the site is being worked on behind the scenes, but this doesn't seem consistent with the stated desire to move towards a community governance model.
posted by snofoam at 5:42 AM on May 16, 2023 [17 favorites]


It's pretty much a given that whenever someone suggests something, there will be some people that disagree, and some subset of those people will get into arguments with people who have the contrary view. I think that, if there is a problem in that dynamic, the problem is centered on the entrenched arguments that pick up midway through a thread and then continue for days and days until the dead horse is completely stomped into the dust. There may be some wisdom in terminating threads that have thoroughly run their course or even time-limiting some threads for X days or something like that. For example, I think we've covered all of the sensible opinions about this particular suggestion and people are now just arguing entrenched views, or parsing text with a microscope, or telling other people they shouldn't post their ideas, etc. Probably not super valuable stuff. But I don't agree that there should be like a "suggestion box" where people post thoughts and nobody is allowed to respond.
posted by Mid at 6:19 AM on May 16, 2023 [5 favorites]


It might be more useful to point out specific responses that seem "disturbingly outsized."

I think they are pretty obvious.
posted by tiny frying pan at 7:29 AM on May 16, 2023


Here’s one to get us started:

I would like Metafilter management to discuss not letting the community weigh in on MeTa in threads like these.

Seems like a disturbingly outsized and frankly anti-community stance to take on a post that was explicitly made to “respectfully ask the community to please consider” an action.
posted by not just everyday big moggies at 7:47 AM on May 16, 2023 [19 favorites]


Seems like a disturbingly outsized and frankly anti-community stance to take on a post that was explicitly made to “respectfully ask the community to please consider” an action.

I think some of the tension in this thread is because some people are seeing the MeTa post as asking individual users to individually consider this individual opinion and make individual choices about what they individually want to do with that information. Others (including myself) see MeTa posts as a way of asking the MeFi community as a whole to collectively consider an individual opinion and make collective choices about what we collectively want to do with that information.

Which points to part of the issue here being a disconnect in what users are expecting in posting or in responding. The fact that moderators and owners seem to have stepped away from MeTa a long time ago further confuses things. I do think MeTa used to function as a much more "collective decision-making" space, or at least a space for gathering diverse opinions before Matt made a final decision (or actively chose not to), and so it makes total sense that people who view this space in that way would be using it to persuade others to their own viewpoint. It may be that the lack of moderator interaction over time has created at least some expectation that we're not doing collective decision-making (or lobbying, really) here, and I can see why it would seem cruel to them that people seem to be "voting" or pushing opinions until a decision is made (because they're not seeing this as something that needs a collective decision).

It would help if the site (jessamyn, moderators) could start clearing up the purpose and function of MeTa, or at least start having those conversations. Because right now it feels like everyone's feeling bruised and battered and I suspect at least some of it is either pointless or unnecessary.
posted by lapis at 8:10 AM on May 16, 2023 [13 favorites]



It would help if the site (jessamyn, moderators) could start clearing up the purpose and function of MeTa, or at least start having those conversations. Because right now it feels like everyone's feeling bruised and battered and I suspect at least some of it is either pointless or unnecessary.

Perfectly said, lapis. At the moment, people are speaking past each other here because we've got fundamentally different ideas about what the purpose (and implications) of the conversation is.

Is it like a quick, polite note on the neighbourhood noticeboard, for those who happen to see it, asking them to think about something they might otherwise not have?

Or is it standing up at a meeting and proposing something to the decision makers, who will then decide how to act?

Or some other thing?
posted by Zumbador at 9:46 AM on May 16, 2023 [4 favorites]


Agreed.

That said, even ideas that were, more or less, agreed upon communally in Meta a while back, even with moderator input, still aren't hard and fast rules. Posts still often link to news and other sites without pointing out the domain, don't always provide archive links, don't always provide original links if they're providing an archive link, don't always provide content warnings, and so on. I think there've been some decisions or agreements in Meta that moved some needle somewhat, but it's rare for the effect to go farther than that. Which isn't always a bad thing - this isn't a very absolutist place in aggregate (absolutist individual takes notwithstanding!)
posted by trig at 11:27 AM on May 16, 2023 [2 favorites]


Everyone is getting very out-of-sorts about a simple request. Which is, to be clear, as follows:
if the link is in the FPP text, it is easily clickable without seeing the MetaFilter tags that are inside the post so i myself, and perhaps others too, would greatly appreciate if people making links to these sorts of things on the front page place a little bit of warning text, content warning, trigger warning, or the like around links to things that directly interact with AI constructs. (Emphasis added)
The very first (standing) response on this post met the brief by suggesting that:
In the same way that folks used to add [SLYT] to "Single Links to YouTube," I would be very happy to see [AI] after a link to an AI-driven web site.
And that solves the request! Just like before the site added the video icon to YouTube links, having "[AI]" or "[LLM]" or something similar is enough of a heads-up about what the content is. There is no ask that this be enforced by moderators; there is no ask that this be automated: just a quick note about where it's from, which takes five or six extra keystrokes.

The AI/LLM stuff doesn't offend me, specifically, but I also don't really find AI-generated content to be all that interesting so in general I'm not going to click on it. For those of you who aren't into it, think about it this way: it will sharply decrease the number of comments in your thread complaining about how dumb AI content is.

And meanwhile, everyone should probably remember that the "SLYT" thing was controversial at the time too.
posted by thecaddy at 11:47 AM on May 16, 2023 [3 favorites]
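
For what it's worth, thecaddy's "[AI]" suffix is mechanical enough that it could even be semi-automated. A rough sketch, assuming a small hand-maintained watch list of domains known to put visitors in front of an LLM (the list and function here are hypothetical, not a real site feature):

    from urllib.parse import urlparse

    # Hypothetical, hand-maintained watch list; not an authoritative registry.
    LLM_FRONTED_DOMAINS = {"goblin.tools", "chat.openai.com"}

    def ai_suffix(url: str) -> str:
        """Return ' [AI]' if the link's host is on the watch list, else ''."""
        host = urlparse(url).hostname or ""
        # Flag the domain itself and any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in LLM_FRONTED_DOMAINS):
            return " [AI]"
        return ""

    print("goblin.tools" + ai_suffix("https://goblin.tools/"))  # goblin.tools [AI]

The hard part, as the rest of the thread makes clear, is not the keystrokes but agreeing on what belongs on the list.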


IMO, the spirited discussion generated by this simple request is more about some users’ thoughtful concerns about the site as a whole than the request itself.

I thought posts above by mstokes650, cupcakeninja, warriorqueen, and lapis all raised important issues re balancing the responsibilities of speakers and listeners in a way that respects everyone’s interests while fostering communication, and would be interested in others’ thoughts on those issues.
posted by lumpy at 12:48 PM on May 16, 2023 [9 favorites]


I wasn't going to bring this up again, but since we're still going, I'm really still unclear on what gets an [AI] or [LLM] tag.

That's not a rhetorical question. I just don't get it. I actually do not see a fundamental difference between running a Google search and reading what the algorithm served me and typing words into a chatGPT-driven window and reading the results. I *do* see a problem with understanding whether the information is any good, and I see a *ton* of problems with like, artistic and intellectual property issues. But I'm having trouble parsing this request.

It's like, saying something links to YouTube is obvious to me. But no one's asked me to identify links if they're in a particular aspect ratio or if the site is running on Drupal.

Like, I was thinking about this on my run this morning - I was listening to a Murderbot audiobook and then I was considering Blade Runner and chatGPT and I just...don't understand how goblin.tools somehow crosses over into interaction vs. reading. It's not talking back to me or calling me or dragging me out of the way of a missile.

Like chatbots on a customer service site - is that an AI/LLM interaction? If I'm linking to an article that has a chatbot available, does that count?

Like, I 100% accept that this is some people's experience, but I don't actually know how to evaluate it. To me this is way more complicated than identifying where a link is going by the name of the site/publisher. Maybe I just haven't grasped it yet, which is likely because I haven't tackled understanding it too deeply yet.
posted by warriorqueen at 12:52 PM on May 16, 2023 [16 favorites]


And that solves the request!

As has been pointed out, the request was to consider it.

Considering the request, and then deciding that it is impractical, undesirable, or just unnecessary, or that it adds needless complexity to posting, also resolves the request. People who think it is not a good idea are free to express their opinion. There's nothing wrong with that, and simply disagreeing is not disrespectful to the person making the request.
posted by snofoam at 1:01 PM on May 16, 2023 [15 favorites]


I just don't get it. I actually do not see a fundamental difference between running a Google search and reading what the algorithm served me and typing words into a chatGPT-driven window and reading the results. I *do* see a problem with understanding whether the information is any good, and I see a *ton* of problems with like, artistic and intellectual property issues. But I'm having trouble parsing this request.

I agree that this line is pretty blurry! It's going to be driven by the poster's own experiences and judgment. My own gut feeling is that the line is, "Am I posting this to show off something that is built by, or primarily uses, generative AI as its reason for being?"

Goblin.tools crosses that threshold for me; so would a gallery or video that uses Midjourney or a similar tool to build imagery. A link to the NYT article about how LLMs work in the context of writing shows would be right on the line and I'd probably tag it. A link to an article that happens to have a LLM chatbot running on that page would not, assuming that the point of the link was not to show off the chatbot. Algorithms do often determine the content that we see--but when we're posting links to YouTube or Twitter or other social media sites, it's the human-generated content that we're (usually) linking to, not the algorithm itself.

This is also a little more expansive than the original request (which is about active generation/interaction rather than content), but is a decent balance and I think a slightly brighter line than just "interaction". I am all for putting some of the responsibility of polite speech back on the listener rather than the speaker--but I'm also in favor of setting context ahead of time so that people aren't surprised.

Considering the request, and then deciding that it is impractical, undesirable, or just unnecessary, or that it adds needless complexity to posting, also resolves the request. People who think it is not a good idea are free to express their opinion. There's nothing wrong with that, and simply disagreeing is not disrespectful to the person making the request.

This is absolutely fair. Like I said, I don't want this to be enforced by the mods or the site software. I absolutely don't want people in threads on the blue or the grey excoriating posters for not adding the flag. The ideal is that some people will start doing this on the blue, and maybe this will become a community norm like tagging youtube and twitter links. Maybe it won't!

The irony is that most of us who would add this flag are also unlikely to post links to sites driven by LLMs, so it probably won't get to norm status. But we're also not going to set that norm by consensus in this conversation--that's not the way Metafilter works, though it often feels like it is. All we can really do here is offer some ideas and then see what sticks.
posted by thecaddy at 2:03 PM on May 16, 2023 [2 favorites]


I have to ask - why would an article about how LLMs work warrant a warning? This is where I keep getting tripped up - a request to tag links where the expectation is that you'll feed information to an LLM is one thing, a request to tag links where you might *look at* the output of an LLM is another (although in my personal opinion it's sort of like wanting a warning that you might see an image that has at some point been through Photoshop), but tagging links where a human has written about LLMs? What, exactly, are we attempting to avoid, there?
posted by sagc at 2:10 PM on May 16, 2023 [7 favorites]


I don't think that it makes a terribly huge difference to add one more minor consideration to the posting process.

In this specific case, what's the worst that happens by doing so? Maybe it encourages an unhealthy and irrational technophobia in some to see AI warnings attached to posts, but I get the impression that most people that don't want to engage with the stuff here see it as more of a nuisance than an existential threat.

In general, posting here is already daunting enough that this seems like a relatively small ask of the uncommonly considerate subset of users here that actually submit FPPs.
posted by otsebyatina at 2:30 PM on May 16, 2023 [1 favorite]


tagging links where a human has written about LLMs?

Ah, sorry I didn’t make that clear! The article is an interactive explainer about how LLMs work that includes generative text based on six different types of sources, and uses a model called Baby GPT.

It was just over the line of what I might tag, as an example—and also the kind of thing I might actually post about AIs and LLMs.

(I’m backing away now, thanks!)
posted by thecaddy at 3:42 PM on May 16, 2023


I think this is classic Mefi in that we've overcomplicated the whole situation. I'm not really invested in this emotionally, but I haven't found AI generated content to be that interesting so I don't go out of my way to view it.

This FPP is a great example. I'd be pretty interested in talented modern painters working in a classical style, and I know that Star Wars is a great way to get more mainstream attention. Oh, it's all AI-generated art? I'm not that interested, but depending on how busy I am later maybe I'll still click through.

I thought the [AI] flag helped clarify the content in a simple, frictionless way that was useful to me even though I don't really have a strong opinion.
posted by kittensofthenight at 3:59 PM on May 16, 2023 [11 favorites]


Seems like a disturbingly outsized and frankly anti-community stance to take on a post that was explicitly made to “respectfully ask the community to please consider” an action.

When some members of the community are so nasty in response to a nice request, I hate to see it. And this is a good example. Now I'm "anti-community" because I don't like how rude these MeTas get. Sure. Yeah.
posted by tiny frying pan at 5:50 AM on May 17, 2023 [2 favorites]


I would like Metafilter management to discuss not letting the community weigh in
posted by oulipian at 6:24 AM on May 17, 2023 [7 favorites]


A brand new thing.

Just cut them up like regular chickens.
posted by flabdablet at 6:45 AM on May 17, 2023 [3 favorites]


It was "change the sheets," not "make the bed" where I got the fun advice to take the dirty sheets off the bed, fold them, and put them into the linen closet. AI is not currently qualified to deliver content about dirty sheets, and I want to avoid helping it get better at seeming to be qualified. In the same way I after the fact didn't want to have contributed--by filling out forty Facebook quizzes about what mixed drink or Spice Girl best matched my personality--to the whole Cambridge Analytica Facebook nightmare that taught the Russian bots how to gin up January 6th for us? In that same way I don't want to inadvertently contribute to making this awful thing good at what it's in the end going to be used to do, namely sell me and my fellow morons world-ending bullshit.
posted by Don Pepino at 7:18 AM on May 17, 2023 [1 favorite]


I want to specify that I am not pro-AI or pro-ChatGPT or whatever, and I am deeply concerned about how the tech may be used to manipulate people's vulnerabilities. I also think that piling up proposals for "one more little thing" to think about every time someone posts here has gotten completely unwieldy.

I'm not pushing against it because I am pro-AI. I'm pushing against it because there are many many things in this world that I dislike and I think are bad for humans, and so I skip articles and posts about them. One of the things I think is bad for us is an assumption that communities need to conform exactly to our preferences at all times with no conflict or friction about things that aren't societal oppression but just disagreements.
posted by lapis at 7:36 AM on May 17, 2023 [33 favorites]


nice request

I don't think it was a nice request. I think someone had their hobbyhorse triggered and decided their feelings were close enough to trauma to complain about it here.

Not enjoying an FPP, finding some particular angle you enjoyed the least, and then opening a MetaTalk to explain that you never want to not-enjoy that angle so much ever again, so could people please tag FPPs better. WTF?
posted by Wood at 8:57 AM on May 17, 2023 [15 favorites]


I'm pushing against it because there are many many things in this world that I dislike and I think are bad for humans, and so I skip articles and posts about them.

The OP has requested that the information necessary to skip those articles and posts be provided up front.
posted by Etrigan at 9:13 AM on May 17, 2023 [2 favorites]


there are many many things in this world that I dislike and I think are bad for humans, and so I skip articles and posts about them.
And making articles and posts about disliked things easier to skip is bad how? I don't get why this is a problem at all. I click a whole bunch of the SLYTs I see, because nothing's better for wasting onerous time than a YT, so it works the other way, too. It just SAYS what the fargin thing IS, what could be better for all involved? You don't gotta go, "I am so sorry for posting this sorry to humanity for I am a bad one," you can just go "AI" or "SLAI" (nice: slay), and then people who're not freaked to the gills about it or are interested in the phenomenon will flock to the post and people who are in a big luddite pet about it or just find it boring will move along quietly. It would make the site work better for all the people. Why is everybody incandescent with rage?
posted by Don Pepino at 9:14 AM on May 17, 2023 [5 favorites]


I don't think it was a nice request. I think someone had their hobbyhorse triggered and decided their feelings were close enough to trauma to complain about it here.

"i would like to respectfully ask the community to please consider placing some textual warnings in FPP text" is a complaint and not a request?
posted by Etrigan at 9:14 AM on May 17, 2023


Because of this bonkers overreaction:

i am fairly upset and i hope to never experience this interaction again,
posted by fluttering hellfire at 9:16 AM on May 17, 2023 [6 favorites]


i am fairly upset and i hope to never experience this interaction again,
OP is simply describing their mindstate. This is helpful to the community. Others in the community may be in a similar state of mind and may appreciate knowing they are not alone. Still others may not have known this attitude toward AI was a potential attitude toward AI. All in the community now know more things! Just marvel at all the learning, the growing, the connections! Just a few of the infinite charms inherent in communication between humans, which is still possible even now, for a while, anyway...
posted by Don Pepino at 9:28 AM on May 17, 2023 [4 favorites]


Because of this bonkers overreaction:

i am fairly upset and i hope to never experience this interaction again,


Note that the OP is describing their own emotions. They are not attributing evil intent to other posters or otherwise pretending to know other people's motivations. And they immediately follow that with "which is why i ask the community to please consider this request."

This was the gentlest possible way to bring this sort of issue in front of the community, and it's characterized as "bonkers overreaction" and complaining about a "hobbyhorse" and "oppression".
posted by Etrigan at 9:30 AM on May 17, 2023 [2 favorites]


I dunno man. I have a bunch of very weird and specific triggers and it would never occur to me to bring them to the community even though there are indeed things posted here that occasionally knock me off my game for a bit.

(One weird example is the demon catshark; I'm pretty sure not too many people were first taken to see the Exorcist at a young age as a means of religious control and then molested while fishing for catfish, but I was. Plus I was also taken to see Jaws (that one was accidental; I was supposed to be asleep at the drive-in) at about the same age, so it was incredibly weird to see "demon catshark" one morning, and thinking about the catfishing incident preoccupied my morning drive. I'm not sure that's a bona fide trigger, even in the post-therapy way, but I've had 'didcha know there are demon catsharks' in my brain a LOT this week.)

I manage that lots of ways, including choosing when I participate.

I do think, thanks to this discussion, that I see some other ways the [AI] tag is useful so that's good.

I still agree with lapis 100% - all these small things really do add up for me around posting, besides just the way MetaFilter has expectations around ownership of those posts (quality, knowing if the author is a poop milkshake, etc.), and I suspect both from the user survey and from declining participation that it's not just me.
posted by warriorqueen at 9:49 AM on May 17, 2023 [9 favorites]


I manage that lots of ways, including choosing when I participate.

You were able to choose to participate because the thing that raised your hackles was spelled out in the post. The OP is requesting the same consideration. They are not requesting that no mention of AI ever be made. They made one comment in the main post and then asked for this consideration here. They have neither piled on nor encouraged a pile-on. They are the ones being piled on here, accused of all manner of ridiculous motives because they dared to make a request in the nicest way imaginable.

Again, I don't mind if you don't ever intend to put an "[AI]" tag when you link something. What I am decrying here is the frankly mean way that some members of this community have responded. I think that that is liable to drive away far more people than the original request.
posted by Etrigan at 10:06 AM on May 17, 2023 [8 favorites]


You were able to choose to participate because the thing that raised your hackles was spelled out in the post.

No, that's not true - the title was the triggering part. That's my point.

I haven't seen a lot of meanness - there has been some, but I really resent that people who have competing concerns are lumped into that. If people want to say that a response is mean, that's fine, but simply disagreeing is not mean.
posted by warriorqueen at 10:10 AM on May 17, 2023 [14 favorites]


Take my phrase "skip the post" to include "click on the link, decide it's not worthwhile, close the linked site, and go about my merry way," please. I don't think clicking open something I realize I don't like is an unreasonable thing to have happen during my day.
posted by lapis at 10:13 AM on May 17, 2023 [4 favorites]


Hey, can we not use words like "bonkers overreaction" and "had their hobbyhorse triggered" when someone says they found something to be really upsetting?

You might not understand that reaction, but you don't have to understand, or even agree, in order to have compassion and not write about them in such a contemptuous way.
posted by Zumbador at 10:29 AM on May 17, 2023 [12 favorites]


You were able to choose to participate because the thing that raised your hackles was spelled out in the post.

No, that's not true - the title was the triggering part. That's my point.


Then there's no way to post that story without potentially triggering some people. But aren't you glad it wasn't just "Hey, here's a weird thing," with no mention of something that might have triggered you? That's what the OP here is asking for.

I haven't seen a lot of meanness - there has been some, but I really resent that people who have competing concerns are lumped into that. If people want to say that a response is mean, that's fine, but simply disagreeing is not mean.

I have been saying, over and over, that I don't care whether you disagree. I care that people -- not you, but you're responding to my responses to them, so I have to talk about them as well -- are disagreeing contemptuously, that they are saying that they refuse to consider the request. As I noted in the comment you appeared to be replying to, "it's characterized as 'bonkers overreaction' and complaining about a 'hobbyhorse' and 'oppression'." I said that some members of the community have been reacting meanly. If you feel I was lumping you in with them, well, it's pretty obvious I can't change your mind.
posted by Etrigan at 10:32 AM on May 17, 2023


that they are saying that they refuse to consider the request

I don’t see how this is even possible. Anyone that’s read this post and responded to it has considered it, surely? To conflate “considered, but does not support” with “refused to consider” is dishonest at best.
posted by not just everyday big moggies at 10:45 AM on May 17, 2023 [11 favorites]


One of the major points made in and around mefi about Mastodon is that choosing and expanding the expectations around content warnings isn't cost-free. "Please start marking the unmarked conversations about X you've been having with a tag XXX" is far from a neutral or implicitly benign act.

I keep hearing meanness over and over again. Consider away, no doubt, but there isn't a benign wounded victim side and then a big bad mean side.

Is everyone else just reading between the lines to try to understand what the original problem actually is? glonous keming doesn't owe me anything, but is this reaction, with no expansion, really the gentlest possible way to bring this concern forward?

It's unclear to me what the actual problem is here, and I get this vibe of: it doesn't matter, it would be rude to ask, can we just toss another keyword on the pile of nice requests. So: if you might want to talk about AI, please include a special warning.
posted by Wood at 10:46 AM on May 17, 2023 [4 favorites]


What the hell is everyone even arguing over at this point of the thread? Goodness.
posted by Jarcat at 10:49 AM on May 17, 2023 [4 favorites]


If you feel I was lumping you in with them, well, it's pretty obvious I can't change your mind.

I wasn't sure, so thanks for clarifying.
posted by warriorqueen at 11:37 AM on May 17, 2023


The fact that moderators and owners seem to have stepped away from MeTa a long time ago further confuses things.

With kindness, I am still here. I read all the threads. However, I think having moderators step in and say "Thanks for your feedback, this is what we're going to be doing about this" in a conversation where there is a specific ask of the community doesn't make sense. And moderator response to MeTa threads is going to be highly dependent on what is being asked for in those threads.
posted by jessamyn (staff) at 5:02 PM on May 17, 2023 [4 favorites]


Should we be interpreting all requests "of the community" as meaning there's never going to be site policy or moderator action taken based on that conversation, then?
posted by lapis at 5:37 PM on May 17, 2023 [2 favorites]


Because it seems like the decision-making model for MetaTalk, and MetaFilter as a whole, is really opaque, which makes figuring out the purpose of these discussions, and the appropriate level of feedback, really difficult.
posted by lapis at 5:41 PM on May 17, 2023 [2 favorites]


This FPP is a great example.

Hi! I posted that FPP as a test after reading this thread up to the time when I posted. I thought the video was interesting and had one standout image, it fit in the theme of "small posts that might brighten someone's day", and I was curious about how tagging the thread with [AI] would go. I probably could have made a stronger post with more research effort but I've posted SLYTs in the past and gotten an OK response, so it was kind of a comparison to similar posts I've made since I re-upped.

My hope in using the [AI] inline tagging was that readers who were interested in (or didn't care about) AI would enjoy it, while users who weren't interested would skip it in favor of another post. Instead, about half the comments I got in the first 24 hours were about AI/fair use/etc., and most of them were honestly kind of axe-grindy or answering to axe-grinding, though there was one I also thought was pretty good. The other half of the comments engaged with the content of the video. I also noticed that it got fewer favorites than other posts I've made along similar lines, which is not the point, but ... also a data point about how the community deals with AI.

As a site member and occasional poster, my takeaway is that the tag is useful to some readers who would like to avoid AI-related content; but as a poster, I found that the tag, or maybe the kind of post that should be tagged with it, attracted lower-quality interaction than similar untagged posts I've made. I personally don't find adding "[AI]" any more of a big deal than adding "[SLYT]", which I find useful. At the same time, I don't like attracting fighty people to my "here is a small thing that improved my day" one-off posts, so there is a downside for me that I found significant enough that it made me less likely to post not just AI-generated art but honestly anything at all to the blue.

It also made me feel like there's going to be fightiness one way or the other about any AI post (from people looking to fight about this topic if tagged; from people who are angry about the lack of tagging if not). I am personally indifferent to AI as a post subject, whether as AI art or AI-chat tool. But I do feel like between this request and the responses to various posts on the blue that there's a definite, if not coordinated, effort to make AI-related posts not worth the hassle and thereby get people to stop posting them.

Net result: my epigrams are sad today.

Since this thread is also pretty fighty and full of grar, I am also checking out to unharsh my mellow. But I wanted to report my own data because I thought people interested enough to post and read about this topic might be interested in this experiment and its outcome. I hope we can find a way to make the community happy around AI posts.
posted by gentlyepigrams at 5:56 PM on May 17, 2023 [23 favorites]


Elsewhere on MetaFlutter:

On the blue: Can We Stop Runaway A.I.?, where MeFites are arguing over the singularity et al.

On the beige (I have to get my monitor fixed): [MeFi Site Update] May 17th - "... Unless there are strong objections, we are planning to add a line against AI generated comments in the Guidelines ..."
posted by Wordshore at 12:49 AM on May 18, 2023 [2 favorites]


At the same time, I don't like attracting fighty people to my "here is a small thing that improved my day" one-off posts, so there is a downside for me that I found significant enough that it made me less likely to post not just AI-generated art but honestly anything at all to the blue.

[...]

But I do feel like between this request and the responses to various posts on the blue that there's a definite, if not coordinated, effort to make AI-related posts not worth the hassle and thereby get people to stop posting them.


This isn't a response specifically @ you, gentlyepigrams -- thank you for the FPP, which nicely illustrates what I've been suggesting -- but if this is really something that's going to result in people purposefully not tagging their FPPs (as opposed to simply not wanting to tag it for whatever moral reason), then I would consider it the height of hypocrisy: deciding that users who don't want to interact with anything AI/LLM should have to interact with it anyway, while treating their negative response as unhelpful/unwanted. As has been pointed out in this thread, negativity and disagreement count as a response. It might not be the desired response, but it is a valid one. (And in any case, as soon as someone figures out it's AI/LLM and comments about it, the argument is going to happen anyway -- possibly more so, given that some users may feel "tricked" into viewing it.)

Maybe the conclusion should be that the community is in a volatile place WRT this topic, for many valid reasons on both sides, and posters should expect that if they're putting out a FPP with this content. For what it's worth, I'm seeing the exact same amount of fightyness and grar in other communities I'm in which are also struggling with how to use and discuss AI content. This isn't unique to Metafilter, it's happening all over the internet.
posted by fight or flight at 4:19 AM on May 18, 2023 [4 favorites]


Should we be interpreting all requests "of the community" as meaning there's never going to be site policy or moderator action taken based on that conversation, then?

No. But things evolve over time. When people have a question about how the community feels about a thing or are making a community request, the mods are listening but not necessarily making a pronouncement or a decision right off the bat. As we've said repeatedly here, the goal state is for the community to be able to make many decisions about the general things they'd like to see and not see here in terms of moderation (outside of the Guidelines and Content Policy and feedback given by the BIPOC Board and a few other trust/safety type things) and have the moderation team be able to implement those things. The Steering Committee has helped with that, but we've had a setback in not being able to work with them.

This discussion about AI is an early days conversation for this topic in a community which is nearly 25 years old. Sometimes the community coalesces around a strategy earlier on in the MeTa discussion process and other times it doesn't. This time it doesn't seem to be, at least not yet.
posted by jessamyn (staff) at 7:17 AM on May 18, 2023 [4 favorites]


So again, I am trying to get a sense of how the mods and owner view the purpose of discussion in MeTa, and therefore how users should be engaging. It sounds like you're using it to gauge whether the members as a whole have consensus? Or for members to come to decisions? (Which is impossible in this format, but that's another discussion.) In which case, responses of, "No, that doesn't make sense to me," are valuable contributions here, and people who are against a suggestion should be speaking up. Correct?
posted by lapis at 7:24 AM on May 18, 2023 [1 favorite]


Correct. However, people who do not like a suggestion should also be doing their best to be mindful and respectful of the fact that it's hard to bring a topic to MeTa and sometimes hard to receive criticism, and so should try to give their feedback, even if negative, in as constructive and kind a manner as possible.
posted by jessamyn (staff) at 7:34 AM on May 18, 2023 [5 favorites]


And then when are things raised up to "this is a site policy decision"? What are the criteria for deciding that, and who's making the final decision?
posted by lapis at 8:34 AM on May 18, 2023 [1 favorite]


For me, Meta is many things and many situations ... including the closest thing Metafilter has to a water cooler -- that place where all (with keys to the facility) can informally hang out and discuss this-that-other-things. Sometimes it's all good fun. Other times, it's anything but. As for this particular discussion, it seems to be an inevitable back and forth (and here and there) concerning a fresh new cultural confusion driven by new tech and how it's colliding with our various worldviews and comfort levels. Yes, it happened to start with a rather emotional request (not a demand) ... but maybe that's just how it goes sometimes. Sometimes somebody at the water cooler doesn't think to filter their emotions because they are indeed "fairly upset" and so it shows up in their tone.

And then, others respond, sometimes emotionally themselves -- they agree, disagree, fire back, extrapolate and as Jessamyn just put it ...

Sometimes the community coalesces around a strategy earlier on in the MeTa discussion process and other times it doesn't. This time it doesn't seem to be, at least not yet.

So, I suppose, the discussion continues ... and not just around the water cooler. We will be taking it home with us, or across the street to the pub or whatever. And at some point, we'll either achieve something approaching consensus or some functional state of agreeing-to-disagree or ....

And then when are things raised up to "this is a site policy decision"? What are the criteria for deciding that, and who's making the final decision?

whoever's got the power/responsibility one would imagine.
posted by philip-random at 9:07 AM on May 18, 2023


I hopped in and caught up because mstokes650's comment popped up on the Popular Comments feed. It's a good comment and a lot about this discussion is interesting to me!

I would love to split a hair between "tag that announces a type of content" and "tag that serves as a content warning." While I find a lot of AI discourse exasperating and occasionally underwhelming, I don't need a tag to tell me that I might be bothered by a post. What I do like about [SLYT], on the other hand, is that when I'm looking for text content, it tells me which things are videos.

Similarly, some things which sound very interesting to me become far duller if the real thing going on is "I asked ChatGPT to do something for me." That's not for any ethical or philosophical reason: it's simply a matter of what things intrigue me. Goblin Tools, for instance—to take an understandably very popular post!—lost my interest when I realized it was a series of AI-driven tools. The idea of crafting tools to help neurodivergent people with tasks was really interesting to me, but primarily because I like seeing what person-driven techniques make a genuine difference for people; I totally get why people find the AI solutions interesting and even helpful, but for me personally, knowing up front that the underlying thing was AI would've been nice.

I don't feel the need to demand an AI tag, or to have this be a tag at all, but mentioning that something is AI is a useful and convenient framing sometimes. That has nothing to do, again, with "the politics of AI disgust me" (even if they kinda do) or "the AI hype is not really doing much for me" (even if it doesn't). It's just that, oftentimes, I like knowing what I'm clicking on, and specifying what things are AI-related and what things aren't helps do away with mystery meat, a little.
posted by Tom Hanks Cannot Be Trusted at 9:16 AM on May 18, 2023 [10 favorites]


whoever's got the power/responsibility one would imagine.

Right. That's what I'm asking. Does jessamyn decide, based on input here? Is jessamyn asking us to decide, based on conversation here? Do the moderators plus jessamyn decide? Do they decide some things and the conversation in MeTa is considered a decision in other cases? How do we know when a decision is made? How do we know which decision-making rules are in play in any given discussion?
posted by lapis at 9:16 AM on May 18, 2023 [1 favorite]


And I'm not pushing for particular answers to any of those questions, just for clarity on the process. Because it seems like a great deal of the heat in this thread was due to disagreement about what sort of discussion we were having. It seems like that could be avoided, or at least greatly reduced, if everyone were clearer on the decision-making processes here.
posted by lapis at 9:19 AM on May 18, 2023 [5 favorites]


And then when are things raised up to "this is a site policy decision"? What are the criteria for deciding that, and who's making the final decision?

When it seems like there is consensus or urgency, usually. Or when it's time for a rewrite of our values documents, in which case we'd be looking for community input.

I don't want to pass the buck here but this was a thing that the Steering Committee was going to help hammer out and now can't. I was not expecting to be in this role and, as I've said repeatedly, would like this to be a community-run site. However, we're in a messy space where we're trying to figure out how it can be that; anything decided in this weird in-between time is going to be somewhat ad hoc. I'm aware this is an unsatisfying answer, but it is honest. Any MeTa asking for community input is going to have some members pushing for a moderator-and-policy led solution and some members pushing for a community-driven solution.

You are welcome to email me directly if you would like to talk more about this at length. I think it's diverging from the original purpose of the thread and I'm going to step away.
posted by jessamyn (staff) at 10:13 AM on May 18, 2023 [25 favorites]


The Washington Post tool says Metafilter has 1.3M tokens.
posted by unliteral at 12:02 AM on May 19, 2023


Would we know if an LLM joined Metafilter and started commenting/posting?
posted by Thorzdad at 7:39 PM on May 23, 2023


Unless they've made some fairly dramatic improvements in the last few weeks, in all likelihood yes.

(Also, a large language model can't join metafilter. It's not a person. It has no agency. It especially doesn't have five bucks to pay for the account. A person could join metafilter, and set up a system to generate comments which get posted from that account. LLMs aren't general AI; they can't "do" stuff.)
posted by Dysk at 11:50 PM on May 23, 2023 [7 favorites]


Yeah, I know all that, Dysk. I assumed it was obvious to all that someone would have to pay the $5 and set it all up beforehand.
posted by Thorzdad at 5:41 AM on May 24, 2023


Would we know if an LLM joined Metafilter and started commenting/posting?

Ok, I was just curious and went to ChatGPT and said "Please compose an answer to the following question posted on ask.metafilter.com [full text including more inside of the current top question]."

I won't post the answer here, but suffice it to say, yes we would know. The answer is probably correct/useful info and the language is human/correct, I guess, but the tone/form/structure of the answer is all wrong. I then added "that doesn't really read like an answer to an ask.metafilter.com question. Can you edit it so the tone, form, structure, language etc. read more like an ask.me answer?"

The response was not any better, though it was kind of more amusing.
posted by If only I had a penguin... at 5:43 AM on May 24, 2023
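
For the curious, here is a rough sketch of how that experiment could be scripted against the API instead of typed into the web UI, using the openai Python library's ChatCompletion interface as it existed around this time. The API key, model name, and question text are placeholders, not anyone's actual setup:

    # Sketch: feed an Ask MetaFilter question to a chat model and print
    # its attempted answer. Uses the pre-1.0 openai Python library.
    import openai

    openai.api_key = "sk-..."  # placeholder API key

    ask_question = "..."  # full text of an AskMe question, "more inside" included

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": "Please compose an answer to the following question "
                       "posted on ask.metafilter.com: " + ask_question,
        }],
    )

    # The generated text is in the first choice's message content.
    print(response["choices"][0]["message"]["content"])

As Dysk notes above, nothing here gives the model agency: a person still has to run the script, hold the account, and post the output.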


I assumed it was obvious to all that someone would have to pay the $5 and set it all up beforehand.

But then in a very real sense it would not be an LLM joining mefi or participating in threads here. It would be a person joining (or setting up a sock) and using an LLM to generate text, with which the person would participate in the threads.

There are enough people who think that LLMs are functionally GAI that it's worth making these distinctions, in my opinion.
posted by Dysk at 8:48 AM on May 24, 2023 [9 favorites]


+1
posted by amtho at 8:11 PM on May 24, 2023

