LLM-wielding students seem to be a new "attack vector" in this and many other repos #1952
Replies: 7 comments 7 replies
-
Agreed. I for one have been trying not to be too heavy-handed or confrontational in order not to turn away potential new open source contributors, but I acknowledge that this tendency has serious downsides.
-
Thanks @nickchomey for referencing Ghostty terminal's contribution guidelines; they look solid. Also, I think we could stop accepting PRs that have no assigned issue, since it is much simpler to review an issue than a PR.
-
What I suggest is that we not force a formal process or template on contributors. We should accept contributions in whatever form folks can provide, and then politely work with them to get them into shape. When someone is far off base, we can point them to our contribution guidelines, more than once if needed.

Recently we've also seen a rash of low-effort AI-generated submissions. In the last few days we've been more aggressive about closing issues that fall into the "AI slop" category. I think we should keep this up to protect the time of leadership. That is the behavior I'm proposing (and I am open to other suggestions).
This is just my 2 cents. I know your time as contribution leads is valuable, and I want to respect it. I'm hoping that being quicker to close non-actionable items addresses some of your concerns.
-
I appreciate the desire to be kind and diplomatic (I was actually quite positively struck by it when I and others started asking questions about Fetch Metadata for CSRF), but I suspect it won't work in this situation, because these are not good-faith efforts, and the submitters are not potential contributors you should want to cultivate. There's a significant difference between someone genuinely trying to help who needs some guidance, and someone lazily, selfishly, and disrespectfully submitting AI-slop issues and PRs to get something on their CV. Consider following curl's lead: https://curl.se/.well-known/security.txt
-
Hey @nickchomey, we can always be diplomatic, and as leaders there is no excuse not to be. Please do not let this stress you out. If anyone is getting upset at suspected AI content, assign it to me and I'll take care of it. Leaders, put your swords down. We must always be kind and diplomatic, even when closing out AI slop issues. Even when blocking repeat offenders. Even when we get upset.
-
> We will ban you and ridicule you in public if you waste our time on crap reports.

This is horrific and I do not at all support it. Banning is OK; ridicule is not. Please always be kind, and if you can no longer be kind, assign the issue to me. I realize this topic is stressing you all out. Please don't let it. Just pass the junk to me and I'll sort through it. I submit this to you all with respect. I do not want any of you to suffer or have your time wasted.
-
This touches on a broader AI governance challenge. From a GRC perspective, the issue of LLM-generated contributions to security-critical resources like OWASP cheat sheets is essentially a supply chain integrity problem. Organizations that reference OWASP guidance in their security policies and control frameworks are trusting the quality and accuracy of that content. If AI-generated changes introduce subtle inaccuracies into security guidance, the downstream impact on compliance programs and risk assessments could be significant.

The approach @jmanico described — clear contribution policies, maintainer gatekeeping, and not hesitating to close low-quality submissions — aligns well with how organizations handle change management controls (e.g., NIST 800-53 CM-3). Having a defined, enforceable process for vetting contributions is really the same principle as configuration change control, applied to documentation. Appreciate the transparency in how the maintainers are addressing this.
-
It is abundantly clear to me that this repo (and seemingly many others) is being spammed by LLM-wielding students who are eager to put something on their CV.
There have been some seemingly helpful PRs that make small (e.g., http → https) changes, but there's a lot of very concerning stuff going on that I don't think the maintainers are fully aware of yet.
#1927 is the most egregious example of this, but #1937 is entirely of this nature. I opened #1936, but the only two responses I got were clearly of the same kind. Even after I rejected the terrible proposal there, the submitter went ahead and opened a PR (#1949) anyway, which @jmanico started treating seriously. I've noticed other maintainers (e.g. @szh in #1937) doing the same.
You created an AI-disclosure policy and PR template (which only seems to be used sporadically), but there's clearly nothing stopping these people from flouting it all. Moreover, even if they say they "manually reviewed" the AI output, what value does that have when they are self-interested ESL students with limited knowledge and experience?
Given that this is a premier source of security guidance, it seems prudent for your team to give serious thought to how to be significantly more vigilant going forward about guarding against this issue.
edit: To be clear, I say all of this with significant gratitude for the work you have done. It is an alert and a call to action rather than any sort of castigation. I don't really know what can be done about it, but I am sure that all of open source is suffering from the same problem.
Ghostty terminal seems to have a comprehensive approach to all of this: https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTING.md
They don't even allow anyone to open issues directly (and presumably PRs must stem from an approved issue); you have to start a discussion first.
I hope this helps