664 points by namnnumbr 7 days ago | 259 comments on HN
Strong positive · Contested · Low agreement (3 models)
Editorial · v3.7 · 2026-03-15 22:41:03
Summary: Free Expression & Communication Ethics Advocates
Stop Sloppypasta advocates for ethical communication practices in digital workplaces, arguing that unvetted forwarding of LLM-generated content violates the dignity and effort of recipients. The site champions free expression and informed deliberation while critiquing how careless AI use degrades discourse quality, trust, and intellectual labor equity. Content directly engages Articles 1, 18, 19, 23, 26, and 29 (dignity, thought, expression, work, education, responsibility) through structured examples and reasoning.
Rights Tensions (3 pairs)
Art 19 ↔ Art 23 — Content champions free expression of AI-generated ideas (Article 19) while critiquing unvetted forwarding as unfair labor extraction from recipients (Article 23); resolution favors expression bounded by effort ethics.
Art 18 ↔ Art 19 — Site advocates for freedom of thought requiring critical engagement (Article 18) while protecting free expression, including unvetted forwarding (Article 19); resolution privileges thoughtful expression over raw forwarding.
Art 26 ↔ Art 19 — Educational development through effortful thinking (Article 26) is in tension with the right to freely share information (Article 19); the site resolves this by distinguishing thoughtful sharing from careless forwarding.
Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and have written this rant to explain why it's rude and some guidelines for what to do instead
sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.
I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe with the oracle, the latter is people tired of having to look something up for others.
This article's proposal for stopping sloppypasta is to convince the people who do it to stop doing it, but I am more interested in what someone who receives sloppypasta can do.
How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?
I've never done that so far, because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.
I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.
I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral... like they're being hoodwinked somehow.
I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.
And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.
I've been thinking about this: what if AI ran autonomously and found things to criticise that are factually incorrect?
It is easy to do in social media because the context is global but in enterprises it is a bit harder.
Something like "flagged as very likely untrue by AI" is something I would really appreciate.
I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.
When you must remind someone to “think” when using a technology because the least resistant path is to not think… it feels like the technology isn’t really helping.
They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.
They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.
If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro.
- now the fun part: which AI did I use to write the above?
Talking with middle managers in fortune 100 companies, I often get 'send us the documents so we can make a decision'. It used to be that we carefully wrote things and no one would read them. Now we send 3000 pages of AI crap to make sure no one reads it and then we get approved to start working. Not great but the old situation was worse; no one would read anything and ask you to read it for them on a conference call with 36 people; now that does not happen anymore.
What's interesting is that there are probably people who could spend a year happily working with an AI "coworker" without knowing it was an AI, but then get upset and change their viewpoint after learning the truth.
>"I asked Claude about this! Here's what it said:"
>"ChatGPT says:"
My policy suggestion is that we need to completely allow people quoting ChatGPT. That's legit; it's not a bannable offense, not against any policy.
The author wastes time talking about this case, and even does it first before talking about the much worse case:
>"The sender shares AI output as their own work, with no indication a chatbot wrote it."
This is 100 times worse, and it is objective rather than subjective. If the author admits it's AI when confronted, it kills their reputation; if they don't admit it and it turns out to be AI, it's fraud, a fireable offense.
Lumping these two categories of AI use together wastes breath and conflates them; the message will not be clear at all.
What's worse, such a policy actually has the effect of increasing undisclosed AI use. This is a specific case of the general case: banning all AI usage increases unregulated AI usage. Everyone who prohibited employees from using AI in 2024 knows that what you get is undisclosed AI use or content you are not sure is AI written or not. If you give a specific way to use AI, you can add features like auditability, supply chain control, and you can remove any outs from employees and users that do not comply with the policy.
Dealing with people who copy-paste unread slop into emails is probably not a huge issue for most of us. There's much more slop out there masquerading as blog posts, HN comments, etc. It's not a huge issue yet, but there have definitely been times when I found myself midway through reading something and realized it was just an LLM wasting my time.
I'm starting to be reminded of Neal Stephenson's "Diamond Age". He described a future in which people walked around with a nearly invisible defensive army of nanobots surrounding them whose job it was to counter the offensive nanobot swarms of their enemies. Characters in this novel would go about their business while an unseen nanobot war took place in the air around them.
We're rapidly reaching the point where we will need AI to defend us from AI. i.e. We will soon need agents filtering all that we read and removing slop, just so we can preserve our time and attention for things that are human and real.
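A defensive filtering agent of the kind described could, at its simplest, score incoming items and suppress anything over a slop threshold. Here is a sketch with a crude stand-in scorer based on stock LLM filler phrases; a real agent would use a model, and every name, phrase list, and threshold here is an assumption for illustration:

```python
import re

# Stand-in heuristic: density of stock LLM filler phrases.
# A real defensive agent would use a classifier, not a phrase list.
FILLER = re.compile(
    r"\b(delve into|ever.evolving|multifaceted|tapestry"
    r"|testament to|in today's fast.paced)\b",
    re.IGNORECASE,
)

def slop_score(text: str) -> float:
    """Filler-phrase hits per 100 words; a rough proxy, not a detector."""
    words = max(len(text.split()), 1)
    return 100 * len(FILLER.findall(text)) / words

def filter_feed(items: list[str], max_score: float = 2.0) -> list[str]:
    """Keep only items under the slop threshold."""
    return [t for t in items if slop_score(t) <= max_score]

feed = [
    "Here's the bug: the mutex is taken twice on the error path.",
    "In today's fast-paced world, we must delve into the multifaceted "
    "tapestry of synergy, a testament to the ever-evolving landscape.",
]
# Only the first, human-sounding item survives the filter.
print(filter_feed(feed))
```

The irony the thread notes applies here too: as LLMs learn to avoid their own tells, phrase-based heuristics like this one stop working, which is why the filtering would itself need AI.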
I would say LMAAFY is like LMGTFY, whereas sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.
I've had some luck pointing out where the AI is wrong in their sloppypasta, as delicately as one can. Avoiding shame or embarrassment can be a powerful motivator.
The most interesting incident for me was having someone take our Discourse thread, paste it into an AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post the response, which lambasted me, back into the thread. The mods handled that one before I was aware, but I then did the same thing, giving different prompts, and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.
I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.
I've found success having sidebar conversations with the colleague (e.g., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might alter their behavior.
It may also be useful to propose or contribute to a broader policy on appropriate AI use, and lean on that policy to justify the conversation.
> I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.
Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.
I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.
Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point LLM trope.
However, the essay and the guidelines were all human-written!
I don’t mind this so much if they don’t know anything about the subject themselves. What bothers me is when they then copy it at domain experts as if it makes them qualified to talk.
I don't think that "it's more of the same" is a good way to think about it. The internet contained a lot of low-quality content, but even low-quality content used to be fairly expensive and time-consuming to produce. Further, you could immediately discern bottom-of-the-barrel content-farmed nonsense by the writing style alone. Now, LLMs make it practically free to generate unlimited amounts of slop that drowns out human-written stuff, and they can imitate the style hints we used to depend on for quick screening.
Your ellipsis leaves out the answer to your question. The paragraph is contrasting "ChatGPT says" which is annoying, but transparent (as LMGTFY), with "sloppypasta" which includes no such indicator.
Admittedly, the paragraph is somewhat confusingly written. Also probably written by an LLM.
when a truth is revealed to someone operating under a totally different understanding of a situation, it can be confusing, disorienting and upsetting.
this seems reasonable to me, especially in this transition period where we're navigating ethical and respectful collaboration that involves AI. give people a little grace in this weird new world.
A lot of middle management consists of reading documents from those below them, giving feedback to improve the clarity of the doc, and then providing their own thoughts and comments on it.
This is one role where I can't tell whether it becomes completely useless in an AI-powered world, or whether it's basically what we all end up doing: reviewing and commenting on the work rather than actually making it.
I am sorry, but in what way is everyone letting the "We've been creating bait content for a long time" comment slide?
Did you even read the article? It is about person-to-person interactions. The three examples were:
* Someone butting in to an ongoing discussion with a solution (but it's generic and misfitting AIslop)
* Someone being asked for their expertise and responding (but it's generic and misfitting AIslop)
* Someone comes with a problem thesis looking for help (but it's generic and misfitting AIslop)
The only one of these that existed prior to AI was the middle one, and the article very specifically calls out how transparent it used to be, because it had the shape of a google link.
The first one would be impossible because the person would have to write an unhelpful response, and they wouldn't find the words at length. You could ignore them or pick it apart easily. The last one would be impossible unless they were copy-pasting from a large PDF, which would look nothing like a chat message.
What kind of workplace hellscape do you work in where people posting low-effort bait on Slack was the norm? The premise of this reply is entirely nonsensical.
> The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.
Reading AI generated prose, even if it’s my prompt, always gives me the same feeling as when I read a LinkedIn post: Like a simple concept was stretched into an unnecessarily long, formulaic format to trick the reader into thinking it was more than it was.
Everyone taking their scraps of thoughts and putting them into an LLM likes it because the output agrees with them. It’s flattering. But other people don’t like it because we have to read walls of text to absorb what should have been a couple of their scattered bullet points.
Just give me the bullet points. Don’t run it through the LLM expander. That just wastes my time.
Content directly champions freedom of expression and opinion. Core argument is that unvetted forwarding of AI content violates the spirit of free expression by preventing thoughtful discourse and reducing information quality.
FW Ratio: 57%
Observable Facts
Page title emphasizes 'Don't paste raw LLM output at people.'
Content critiques how sloppypasta 'blocks the discussion already underway.'
Site provides structured space for discussing communication ethics without censorship.
Further Reading section suggests additional sources for critical engagement.
Inferences
Central advocacy for thoughtful communication directly supports Article 19's protection of free expression.
Critique targets the mechanism (unvetted forwarding) that degrades discourse quality, not the ability to express views.
Site structure enables readers to form and share informed opinions on the topic.
Content advocates for dignity, freedom from deception, and shared responsibility in communication. Frames ethical communication as foundational to human cooperation and mutual respect.
FW Ratio: 60%
Observable Facts
Page presents ethos of human dignity through framing communication ethics as central concern.
Content emphasizes 'dignity of the basic human reply' and respect for recipient effort.
Examples section labels each scenario with observable consequences (unrequested, generic, presented as personal work).
Inferences
The emphasis on dignity and mutual respect aligns with Preamble's foundational values of freedom and equality.
Framing sender-recipient asymmetry as an ethical issue reflects commitment to human welfare over convenience.
Content directly addresses education and cultural development. Emphasizes that thoughtful communication and critical thinking are essential to human development. Sloppypasta undermines learning and intellectual growth.
FW Ratio: 57%
Observable Facts
Page emphasizes that 'Writing requires effort, which contributes to comprehension.'
Content describes how LLMs reduce the 'struggle' necessary for cognitive development.
Site includes Why It Matters section with expanded educational reasoning and pull quotes.
Further Reading section provides curated resources for deeper learning.
Inferences
Direct critique of practices that undermine learning supports Article 26's protection of education.
Interactive site design promotes active learning rather than passive consumption.
Pull quotes and expanded reasoning encourage critical engagement with ideas.
Content directly addresses equality and dignity in communication contexts. Criticizes practices that treat recipients as passive receptacles rather than equal participants deserving consideration.
FW Ratio: 60%
Observable Facts
Page states: 'The asymmetric effort makes it rude' regarding sender behavior.
Content contrasts treatment of recipients as 'equal participants' versus passive consumers of unvetted content.
Examples section presents four named patterns with equal analytical weight.
Inferences
Critique of effort asymmetry reflects commitment to Article 1's principle that all humans should be treated as equals.
Naming patterns (Eager Beaver, OrAIcle, Ghostwriter) humanizes the issue rather than moralizing.
Content directly advocates for freedom of thought and conscience in communication contexts. Critiques practices that bypass individual judgment ('recipients are left to figure out') and emphasizes thinking critically about sources.
FW Ratio: 60%
Observable Facts
Page states: 'Recipients are left to figure out whether it's AI generated, whether it's correct, and which part actually answers the question.'
Content emphasizes the need for 'validation or critical review' before sharing.
Why It Matters section includes expandable sections requiring user interaction to reveal full reasoning.
Inferences
Advocacy for critical evaluation of information sources protects freedom of thought.
Interactive design structure requires users to engage consciously rather than passively receive.
Content addresses participation in cultural life and benefit of scientific progress. Emphasizes that authentic contributions to knowledge and communication are forms of cultural participation. Unvetted AI forwarding dilutes this.
FW Ratio: 60%
Observable Facts
Page identifies four named patterns of sloppypasta, treating each as a distinct social phenomenon worthy of analysis.
Content emphasizes that recipients should understand 'perspective and expertise' rather than generic responses.
Rules section frames best practices as learnable cultural norms that contributors can adopt.
Inferences
Advocacy for authentic contribution respects right to participate in cultural and intellectual life.
Clear attribution and analysis support intellectual property rights and cultural participation.
Content advocates for protection from discrimination based on communication practices. Emphasizes that all communicators deserve equal respect regardless of whether they use AI assistants.
FW Ratio: 60%
Observable Facts
Page does not stigmatize AI use itself, only the practice of forwarding unvetted output.
Content acknowledges 'intention is good' for Eager Beaver pattern despite problematic execution.
Examples present behaviors as learnable mistakes rather than character flaws.
Inferences
Non-judgmental framing protects dignity of senders while critiquing behavior, reducing discrimination.
Structural equality in presentation suggests no particular group is targeted for exclusion.
Content addresses right to own property implicitly through critique of intellectual labor. Ghostwriter pattern reflects uncompensated appropriation of recipient's verification labor and sender's intellectual credibility.
FW Ratio: 60%
Observable Facts
Page states: 'If the content turns out to be wrong, that credibility is what gets spent' regarding Ghostwriter pattern.
Content describes how the forwarded output 'borrows the sender's credibility' without attribution.
All examples are labeled and contextualized rather than presented as original site content.
Inferences
Critique of credential appropriation protects intellectual property and labor value.
Emphasis on disclosure respects both sender and recipient's ownership claims.
Content addresses social and cultural rights implicitly. Emphasizes individual responsibility within communities and the cultural value of thoughtfulness, effort, and authentic contribution.
FW Ratio: 60%
Observable Facts
Page addresses communication patterns within contemporary workplace cultures.
Content emphasizes shared responsibility for maintaining communication quality in group settings.
Multiple examples show context-specific communication failures across different platforms.
Inferences
Advocacy for maintaining communication norms supports cultural values of thoughtfulness and integrity.
Recognition of context-specific etiquette respects cultural variation in communication practices.
Content directly addresses community responsibility and duties. Core argument is that communicators have responsibility to communities they participate in. Sloppypasta violates this by externalizing effort.
FW Ratio: 60%
Observable Facts
Page frames sloppypasta as 'etiquette failure' showing 'disregard for recipient' within communities.
Content discusses scenarios in collaborative team contexts emphasizing shared responsibility.
Rules section uses language of best practices and norms rather than prohibition or punishment.
Inferences
Critique of communication irresponsibility supports Article 29's emphasis on community duties.
Non-coercive framing respects autonomy while promoting responsibility.
Content addresses freedom of movement implicitly through critique of how unvetted AI text 'buries the live discussion' and blocks communication flow within communities (Slack, Teams, email).
FW Ratio: 60%
Observable Facts
Page notes that unvetted text 'blocks the discussion already underway' and forces participants to scroll past it.
Content describes scenarios in collaborative communication platforms (Slack, Teams, Notion).
Site is freely accessible with no barriers to entry.
Inferences
Critique of communication blockage relates to freedom of information exchange within communities.
Open access structure supports Article 13's principle of free movement of ideas.
Content advocates for freedom of peaceful assembly implicitly. Examples describe how sloppypasta disrupts group communication in collaborative platforms (Slack, Teams). Rules section emphasizes responsible participation in shared spaces.
FW Ratio: 60%
Observable Facts
Page addresses communication patterns in collaborative team contexts (Slack, Teams, email).
Content notes how unvetted text 'blocks the discussion already underway' and creates friction in group settings.
Rules section establishes norms for respectful participation without restriction on who can learn them.
Inferences
Critique of communication practices that disrupt group discourse relates to right to peaceful assembly.
Emphasis on group-aware etiquette supports collective participation norms.
Content addresses adequate standard of living implicitly through critique of communication practices that reduce efficiency and increase cognitive burden in work contexts.
FW Ratio: 60%
Observable Facts
Page describes how unvetted forwarding creates 'additional verification burden' on recipients.
Content notes that LLMs 'increase cognitive debt by reducing struggle.'
Site provides dark mode support and clear typography supporting user well-being.
Inferences
Critique of practices that increase cognitive burden relates to maintaining adequate standard of living and well-being.
Accessible site design respects user dignity and wellness.
Content implicitly addresses privacy of communications by critiquing the practice of forwarding unvetted content without contextual editing or framing. Recipients' intellectual privacy is implicitly protected.
FW Ratio: 60%
Observable Facts
Page does not include visible analytics, tracking pixels, or data collection mechanisms.
Content emphasizes respecting recipient's right to understand what they are reading (disclosure of AI origin).
Examples highlight problem of 'no one knows to check' regarding Ghostwriter pattern.
Inferences
Advocacy for transparency in communication sources protects recipients' informational autonomy.
Site's lack of tracking respects reader privacy in practice.
Content implicitly addresses work and fair conditions by critiquing how unvetted AI forwarding creates asymmetric labor burdens. Recipients must do verification work senders avoided.
FW Ratio: 60%
Observable Facts
Page states: 'When someone forwards text they themselves have not considered, they are asking you to do work they chose not to do.'
Content describes 'asymmetric effort' and 'cognitive debt' as consequences of sloppypasta.
Content implicitly addresses social and international order. Argues that responsible communication practices are necessary for functioning societies and organizations.
FW Ratio: 50%
Observable Facts
Page addresses communication in international work contexts through examples.
Content treats communication ethics as universal concern applicable across platforms and contexts.
Inferences
Emphasis on shared communication norms supports functioning of social order.
Global platform examples suggest universal applicability of principles.
Content indirectly addresses democratic participation by emphasizing transparency and informed decision-making in group contexts. Sloppypasta impairs the shared deliberation necessary for democratic functioning.
FW Ratio: 50%
Observable Facts
Page discusses scenarios involving team decision-making (market expansion example).
Content emphasizes that recipients need 'perspective and expertise' not generic responses.
Inferences
Advocacy for informed group deliberation relates to democratic participation in organizations.
Emphasis on authentic contribution rather than generic forwarding supports participatory legitimacy.
Site provides platform for expressing and sharing critique of communication norms. Multiple perspectives presented (recipient, sender, feedback loop). Further reading section supports information access.
Site avoids shaming language; instead frames problems as etiquette lapses that can be learned and improved. Navigation labels treat all sections equally.
Site itself maintains good accessibility and respects user well-being through readable design, light/dark mode support, and non-manipulative structure.
Site itself functions as educational resource with structured examples, reasoning tables, and further reading section. Encourages active learning through interactive elements.
Site design promotes active engagement through expandable sections, explicit reasoning, and multi-perspective tables. Users must engage intellectually rather than passively consume.
Site respects intellectual contributions by clearly attributing examples, labeling patterns, and providing structural analysis. Supports participation in evolving discourse about AI ethics.
Site structure gives equal visibility to different perspectives (sender/recipient/feedback loop) and treats all examples with consistent analytical framework.
Site structure acknowledges responsibilities without coercion. Rules section presents best practices as shared norms rather than enforcement mechanisms.
Terms like 'sloppypasta,' 'enshittified,' 'etiquette failure,' and 'ghostwriter' carry strong negative valence designed to frame unvetted AI forwarding as inherently disrespectful.
Appeal to fear
Repeated emphasis on 'hallucinated details,' 'trust but verify is broken,' and 'all correspondence must be untrusted by default' creates anxiety about AI reliability.
Causal oversimplification
Content attributes broad problems (reduced effort, lost trust) directly to AI forwarding practices without acknowledging other contributing factors (workplace culture, information overload, tool design).