[MeFi Site Update] June 21st (June 21, 2023, 1:31 PM)
Hi there, MetaFilter!
Happy Solstice and welcome to your monthly Site Update! The last update can be found here. You’ll find some updates regarding the site below. I’m looking forward to your feedback and questions.
Reminder: I will be the only mod monitoring this thread so please be patient as I reply to your feedback and questions.
Admin
– As budgeted with the SC earlier this year, we are planning to hire someone to help frimble with some specific projects, starting with completing the flagging UI changes and branching out from there if everything goes well.
– Discussion and consultation with experts about a possible path for the site to have community governance is ongoing and continues to be slower than we’d prefer. We’ll make an announcement when we have specific news.
– EM has officially moved on from doing on-call shifts and is now marked as retired.
Moderation
- Changes to the FAQ are ongoing. Brandon has submitted various suggested changes, and I will approve them later this week.
- The Community Guidelines, Microaggressions and Content Policy are still being reviewed to add clauses about AI-generated comments and ongoing disputes in threads, as well as to be more explicit about ageism, transphobia, regional discrimination, and other forms of discrimination. I will share the drafts in a separate thread as soon as they are ready.
- We've created a starter FAQ entry about the use of ChatGPT and other AI-like tools. Once we wordsmith it, we will add it to the Content Policy.
- At paduasoy's suggestion, we are now adding the "-sidebar-" tag to, and leaving a mod note in, posts that have been featured in Best Of / the sidebar.
Technical changes
- Frimble has made some important changes to the flagging UI in anticipation of some larger changes. The flagging code was inconsistent across the various MeFi themes and subsites; it is now consistent, so it can be changed site-wide without having to make 100 individual changes. The next step is making user-visible UI changes.
- Frimble is also looking into the RSS feeds to make sure the HTML in them is handled properly; the sketch below gives an illustrative sense of what that involves.
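To give a concrete sense of what "handled properly" means there, here is a minimal illustrative sketch (ours, not frimble's actual code; the field names are made up): HTML inside an RSS <description> element has to be entity-escaped, or wrapped in CDATA, or feed readers will treat the feed as broken XML.

```python
# Minimal sketch (illustrative only, not frimble's code) of escaping HTML
# for an RSS <description>; a raw "<" or "&" in comment HTML breaks the XML.
from xml.sax.saxutils import escape

def rss_item(title: str, link: str, description_html: str) -> str:
    """Build one RSS <item>, escaping the HTML so it survives as text."""
    return (
        "<item>"
        f"<title>{escape(title)}</title>"
        f"<link>{escape(link)}</link>"
        # Feed readers unescape this and render the markup; the common
        # alternative is wrapping the raw HTML in a CDATA section.
        f"<description>{escape(description_html)}</description>"
        "</item>"
    )

print(rss_item("Example post", "https://www.metafilter.com/",
               '<a href="https://example.com">a link</a> & some text'))
```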
BIPOC Advisory Board
- All Board Meeting Minutes have been updated and are available on the BIPOC Board page, and the last two sets of minutes are in their own MeTa posts. We're working with the BIPOC Board to make updates and minutes easier to work with.
If you have any questions or feedback not related to this particular update, please Contact Us instead. If you want to discuss a particular subject not covered here with the community, you’re welcome to open a separate MetaTalk thread for it.
I suggest this needs to be stronger:
"Using ChatGPT or other AI-like tools to write answers without explicitly saying you are doing so is discouraged. MetaFilter is, at its core, about knowledge and wisdom shared by its members."
"Discouraged" is going to read to some people as "kinda bad but that's just like our opinion, man."
I don't think you have to go all the way to "not allowed", but such posts should be removed.
posted by zompist at 2:58 PM on June 21, 2023 [19 favorites]
"Using ChatGPT or other AI-like tools to write answers without explicitly saying you are doing so is discouraged. MetaFilter is, at its core, about knowledge and wisdom shared by its members."
"Discouraged" is going to read to some people as "kinda bad but that's just like our opinion, man."
I don't think you have to go all the way to "not allowed", but such posts should be removed.
posted by zompist at 2:58 PM on June 21, 2023 [19 favorites]
Thanks for posting this. I think the ChatGPT stuff does ultimately (as you plan) need to expand, in particular regarding who owns the copyright to AI-generated text and how that interacts with MetaFilter's default position on who owns the rights to comments and posts here. I know it's an evolving situation, but food for thought.
posted by cupcakeninja at 4:51 AM on June 22, 2023 [1 favorite]
Suggestions for ChatGPT/LLM policy from someone who is on the pro-AI side:
1) Use of ChatGPT or other Large Language Model-generated text (“generated text”) in AskMetafilter is prohibited, period.
2) Use of generated text in comments outside AskMetafilter is fine, provided it is clearly labeled or obvious from context that you are using generated text.
3) Copyright of user comments does not extend to any generated text portions of those comments.
4) Use of generated text in the body of a post outside AskMetafilter is prohibited unless there is a clear and compelling reason (eg a post about a new ChatGPT feature, which also serves to demonstrate that feature).
Reasons:
#1 is because AskMetafilter answers frequently touch on areas of life where some familiarity with medical or legal matters is important. All deep-learning systems fundamentally lack the ability to model reality in general and novel situations in particular, and they should never be used for answering questions on AskMe until that changes, decades from now.
On the AskMe post side, answering AI-generated questions just wastes our members' time.
#2 & #3 are both drawn from the same principle, which is that what deep learning systems do is not fundamentally different from what humans do when learning or studying. To take an example from a different but related area of deep learning: if you look at 10,000 images of Renaissance Art specifically labeled as such, a large number of neurons with weighted connectivity is formed in your brain and is linked/coupled with other neural structures representing “the Renaissance” in abstract. When Dall-E or Stable Diffusion are trained on a similar set, a correlating if not structurally identical pattern forms, linked with the tokenized text “Renaissance Art.” However, in the latter case “you”, the poster, are not the entity containing that trained network. It exists inside a non-sapient digital pattern with a neuron-like structure, and you are simply reposting the output of that pattern. Until / unless you personally have done something truly transformative with it, you don’t own that output; nobody does.
This aligns with current US Copyright Office policy, as well.
#4 is due to the fact that a lot of members are - for varying reasons that range from extremely justified to simply mistaken - very nervous or upset about the way most LLMs are currently being built (indiscriminate Internet crawls), anxious about job and income loss due to ill-advised replacement of workers with LLMs, or simply find the authorial tone of default ChatGPT aggravating to read.
Happy to take this to another thread if this isn’t the right place.
posted by Ryvar at 11:26 AM on June 22, 2023 [17 favorites]
Thanks, that's actually helpful. I think part of why we had the word "discouraged" in there is what you explicate in #2, without going all the way to #4, which I'd be in favor of but didn't want to overreach on. I'd like people to mostly not be using generated text here unless there is a really good, explicated reason, the same way I prefer that people not paste in huge walls of text from a linked article (short excerpts, fine; longer ones just feel like "I don't trust you to read the article so I am pasting it here" and are a hassle for people scrolling in small windows).
We probably can't address the copyright issue in any way other than we already do, unless we programmatically differentiate bot-generated text from user-generated text, which is a larger step than we can make right now. It does bear paying attention to, though, and mentioning in the FAQ and/or content policy.
posted by jessamyn (staff) at 1:07 PM on June 22, 2023 [2 favorites]
Yeah, I can see why “heavily discouraged” might be better for #4. Like, ideally Metafilter posts are the opposite of content mill articles, and a really clever prompt might produce something of interest but outside of “an AI wrote this post” for a new AI developments thread, I’m not seeing a lot of positive utility?
Re: copyright - that's definitely more of an "if you're asking, this is the intent" thing, not a "we can detect generated text / do anything about it" thing. Individual developers have been custom-tuning Alpaca output with LoRAs since February or March of this year. Detection of anything other than the most utterly vanilla, lazy copy-and-paste from the big LLMs has been a lost cause ever since, and because different LoRAs can be stacked like a …meta-filter, it might be forever (a lot of companies are going to try to sell detectors regardless, of course).
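If anyone wants the mechanics behind that stacking claim, here is a minimal numpy sketch of the standard LoRA math (all shapes, scales, and values below are invented for illustration; this is not any particular library's API). Each adapter is a low-rank delta added to a frozen weight matrix, and deltas simply sum, so every stacked adapter pushes the effective weights, and therefore the output statistics a detector was fit to, further from the vanilla model.

```python
# Illustrative LoRA composition sketch; shapes and values are invented.
# Each adapter contributes a low-rank update (alpha / rank) * B @ A to a
# frozen weight matrix W, and multiple adapters simply sum ("stack").
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 8, 2
W = rng.normal(size=(d_out, d_in))        # frozen base-model weights

def lora_delta(alpha: float = 16.0) -> np.ndarray:
    A = rng.normal(size=(rank, d_in))     # trained down-projection
    B = rng.normal(size=(d_out, rank))    # trained up-projection
    return (alpha / rank) * (B @ A)

# Stack two adapters: the effective weights now differ from the base
# model, so outputs no longer match what a vanilla-LLM detector expects.
W_eff = W + lora_delta() + lora_delta()
x = rng.normal(size=d_in)
print(np.allclose(W @ x, W_eff @ x))      # almost surely False: shifted
```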
Glad it was helpful, even if it’s just an additional reference point.
posted by Ryvar at 2:32 PM on June 22, 2023 [1 favorite]
I like Ryvar's AI rules; they make sense.
posted by TheophileEscargot at 9:22 PM on June 22, 2023 [4 favorites]
- Frimble has made some important changes to the flagging UI in anticipation of some larger changes. The flagging code was inconsistent across the various MeFi themes and subsites; it is now consistent, so it can be changed site-wide without having to make 100 individual changes.
Hooray for doing the not-so-fun (I assume) but necessary work that tends to get ignored or pushed off till later!
posted by Uncle at 11:52 AM on June 23, 2023 [2 favorites]
I'm all the way over on the "LLMs are poisoning the web" side... and I can live with Ryvar's suggestions, though I think MeFi will very likely end up tightening #2. But I'm okay with letting it ride for now; I could be wrong.
posted by humbug at 12:33 PM on June 27, 2023 [1 favorite]
I think this policy failed in the Russia mutiny thread. A ChatGPT-generated answer to a factual question (about nuances of Russian vocabulary) was posted, and people actually thought it was helpful. Someone said to save complaints until it's wrong.
This is entirely missing the point. And I'm extremely alarmed that people everywhere don't understand the point.
I'm in the really weird position of being caught in the middle of the popular debate about "AI". One camp takes the extreme view that it's not doing anything interesting and is just IP theft on a grand scale; the other takes the opposite, credulous extreme. It's like everyone just projects their fears and hopes onto this tech, with apparently close to no understanding of how it works and what it is and isn't good for.
Over the last few months, I've thought of about fifty different ways this tech could be used to do stuff that's not been possible before now. But treating an example like ChatGPT as some oracle that can dispense the world's knowledge is not one of those uses.
While there are already limited integrations of external validation, as-is an LLM is trained indiscriminately on vast quantities of text, independent of truth. For example, if you trained an LLM exclusively on the world's fictional literature, it would perform well on a number of tasks and some of them — many of them — would have a rough correspondence to things that are true in the real world. Such as, say, "how old are children?" or "what happens if you hit a window with a hammer". Others would obviously be untrue. An LLM, as-is, is pretty agnostic about truth.
There's an enormous amount of real-world information implicit in an LLM because there's necessarily an enormous amount of real-world information embedded in language. An LLM really represents relationships within language. But these relationships correspond to facts just about as reliably as language itself does. Which is to say, very approximately.
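A toy illustration of that point (mine, and absurdly simpler than a real LLM; the corpus is invented): a model fit on nothing but word-adjacency statistics will fluently extend text in the shape of its training language, and truth never enters into it.

```python
# Toy sketch: a bigram "language model" fit on a scrap of fiction. It
# captures relationships within language and extends them fluently, with
# no notion of whether the resulting claims are true of the real world.
import random
from collections import defaultdict

corpus = ("the dragon guarded the glass castle and "
          "the dragon spoke to the knight in the glass castle").split()

model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)            # record which words follow which

random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(model[word])  # sample a plausible next word
    out.append(word)

print(" ".join(out))  # fluent, corpus-shaped, and factually ungrounded
```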
"Hallucinate" really is a poor word to use for when an LLM makes something up. It's always "making something up". It's inference, which is what it's always doing. It's what we do. But when we do it, we're doing so within a context where our inferences our constantly being tested by experience in the real-world. And we often discover our inferences are wrong. We reevaluate how much weight we give to some source in response. Our individual internal models of the world as it exists in language (but they're based on far more than merely our language) is "conditioned" by our experience of the world. An LLM, as-is, has nothing like that other than more examples of language (which isn't nothing, but it's not enough).
I followed the link to that ChatGPT answer in the Russia thread, but I didn't read it. I didn't evaluate it for correctness because why would I? My 11 year-old nephew can tell me many true things about the world. He knows a lot about dinosaurs! But in general he's not a reliable source. I happen to know that he knows a great deal about dinosaurs. What if I didn't? What if I had no clue about dinosaurs myself, nor how or where he's getting info about dinosaurs? If I didn't know him at all? That he confidently tells me things about them is misleading, because younger children are usually confident about what they think they know. I'd have to check a much more credible authority about what he's asserted, and then — if I'd wanted to know about dinosaurs — why didn't I consult that authority in the first place? And maybe I do consult an authority, but the other five people who were there for his infodump don't bother?
My mother is in the early stages of dementia and so her understanding of the tech she uses is less than it once was. Last year I discovered she now thinks that Google results are true, because she thought they were somehow curated, filtered for truth. Of course they're not. But given her reduced comprehension, it was difficult to explain how she was misunderstanding what's happening when she googles.
This is what's happening now with LLMs. People have not yet learned that authoritative-seeming answers at the push of a button are often worse than no answer at all.
An LLM is far less reliable than Wikipedia, and it's taken years for people to learn to be careful with Wikipedia. Yet for some reason tech companies have bolted LLMs onto their search engines.
I've kept saying "as-is" about LLMs. I think that an LLM could be very useful as a modular piece of an information reference that has other modules that do filter (pre or post) on truth value (well, reliability thereof). But ChatGPT isn't that.
We all seem to agree that LLMs shouldn't be used to answer questions in AskMe. That's because the credibility of the answers in AskMe is core to its utility and, as it happens, the fallibility of mefites in AskMe has been a perennial topic of debate. We've complained about people just looking stuff up on Wikipedia to answer questions, because that's not really what questioners are looking for on AskMe. So answering AskMe questions with an LLM answer is something this community pretty overwhelmingly opposes, rightly.
In something like the Russia thread on the blue, where someone is asking a question about Russian language usage, the reliability of the answer is as important as it is in AskMe. At this state of the tech, an LLM answer is wildly inappropriate. It doesn't matter if it happens to be correct in a given instance. The fact that it's often wrong poisons the well. The practice, if allowed, will result in a net negative. I mean, sure, a given mefite's answer may well be even less reliable than ChatGPT. But this is something everyone is aware of. There's no similar awareness with regard to ChatGPT. Quite the opposite, it seems. So labeling it doesn't help.
I wrote my comment in that thread as emphatically as I did very deliberately. This was a misuse of the technology and, more to the point, a misuse of MetaFilter.
posted by Ivan Fyodorovich at 5:22 PM on June 27, 2023 [17 favorites]
Yep, that was a bad comment, and against what I thought the spirit of the rules was. We should discourage people from just feeding someone else's comment into an LLM and then posting a link to that. If people wanted that, they could just do it themselves.
posted by sagc at 6:05 PM on June 27, 2023 [8 favorites]
I request a link to the specific comment you are discussing or, if it was deleted, the mod deletion notification comment in the relevant thread.
posted by brainwane at 3:47 AM on June 28, 2023
It has been deleted, despite the last mod note saying it's within the rules, and there are still references to it.
No idea what's going on or whether the rules have changed or were misinterpreted in the first place.
posted by sagc at 5:57 AM on June 28, 2023 [4 favorites]
The link was just the words "Chat GPT4's answer, fwiw" and a link to ChatGPT's description of... something. It should have been deleted earlier, but we were having a conversation about whether to delete it and prune back the derail of people talking about it, or just leave it be, since we don't have a hard-coded policy yet, just the FAQ entry. The policy as of today is that, outside of AskMe, a comment like that is OK if labeled, though from our discussion both here and with the mod team it seems like it probably shouldn't be okay. And one thing we've heard loud and clear from the user base is that mod decisions should be grounded in policy, and this policy is emergent.
So, there was some hesitation as we talked it over, Brandon and I both left notes, and loup ultimately decided to nix the original comment.
posted by jessamyn (staff) at 10:36 AM on June 28, 2023 [4 favorites]
I find this to be an odd deletion. It was clearly labeled, and didn’t seem wrong or misleading, but even if it was wrong, it wasn’t misrepresented. I guess it would be good to clarify site policy fast, since even discussion here in recent days/weeks hasn’t managed to catch up to where we apparently are today.
posted by snofoam at 6:43 PM on June 28, 2023 [1 favorite]
The site can have whatever policy it wants on LLMs. I know this area is changing quickly right now, so it is understandable that there would be a lag in figuring it all out.
In this specific case, the content was on topic, and it was an answer to an on-topic question in the thread. To me, a bunch of comments dunking on ChatGPT without actually addressing the content were the derail. The post already had a disclaimer that it was from ChatGPT and I think people here already know what that means.
By contrast, there were multiple cases in the same thread where someone posted incorrect information or information from a suspicious source. These were generally pointed out in a constructive way and didn't derail the discussion. To me, this is how Metafilter should work.
I think people should be free to voice their opinions, and discomfort with LLMs is certainly a legitimate one. I also think Metafilter has a vulnerability where users derailing conversations or making them contentious can lead to de facto policy changes because those things become too difficult to deal with. I'm not a fan of this and I feel like that is what is happening here to some degree.
posted by snofoam at 5:13 AM on June 29, 2023 [3 favorites]
I have an idea for something to add to our policies somewhere (not sure where):
Enjoying things is good. You do not have to apologize for enjoying media that you do not agree with politically, or that others find fault with.
Critiquing things is good. You do not have to apologize for saying why a work of art doesn't work for you, or for pointing out how its politics don't live up to our values.
MeFites will like or dislike the same pieces of entertainment and that doesn't imply any of them are doing anything wrong.
posted by brainwane at 11:06 AM on July 2, 2023 [2 favorites]