nfriedly 15 hours ago

I maintain a handful of Open Source projects written in JavaScript and TypeScript, a couple of which are fairly popular, and I don't think I've seen any of this. Maybe it just hasn't reached the JavaScript world yet?

One project in particular is a rate limiter, and for a while I was getting a fair number of bug reports that boiled down to configuration mistakes, such as accidentally rate limiting the load balancer/reverse proxy rather than a specific end user. I implemented a handful of runtime checks for common mistakes like that, each logging a one-line warning with a link to a wiki page giving more details. Since then, the support burden has come down dramatically.
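
Roughly the kind of check I mean, as a simplified sketch; this is not the library's actual code, and the wiki URL is a stand-in:

    // Warn once if X-Forwarded-For is present but ignored, i.e. the
    // limiter is keying on the proxy's IP instead of the end user's.
    let warned = false;

    function warnOnLikelyProxyMisconfig(req: {
      ip?: string;
      headers: Record<string, string | string[] | undefined>;
    }): void {
      const xff = req.headers["x-forwarded-for"];
      if (warned || !xff || !req.ip) return;
      const client = (Array.isArray(xff) ? xff[0] : xff).split(",")[0].trim();
      // If req.ip doesn't reflect the forwarded address, Express's
      // `trust proxy` setting is probably off, so every user shares
      // the proxy's rate-limit bucket.
      if (req.ip !== client) {
        warned = true;
        console.warn(
          "rate-limiter: X-Forwarded-For is set but not trusted; you may be " +
            "rate limiting your proxy instead of end users. " +
            "See https://example.com/wiki/proxy-setup"
        );
      }
    }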

  • ajross 14 hours ago

    Zephyr isn't getting any either, that I've seen. The projects in evidence in the article are Python and curl, so it's likely limited to only the highest-profile targets.

    What would be interesting is who's doing it and why. The incentives don't seem to be malicious; there's no attempt, à la xz-utils, to boost credentials for a real human. Honestly, if I had to guess, it's an AI research group at Microsoft or wherever trying to tune their coding bots.

    • techjamie 12 hours ago

      Curl offers a monetary reward as part of their bug bounty program, so that is a contributing factor in their case.

      It seems to me like talentless hacks hoping ChatGPT will get them easy money/cred without any actual work.

      • nfriedly 12 hours ago

        Yeah, that's probably part of it - none of my projects have any bug bounty.

  • BeetleB 14 hours ago

    [flagged]

    • dang 9 hours ago

      Please don't do this here.

LittleTimothy 15 hours ago

Reading the original bug report and the response by the maintainers... yeesh. They are way more tolerant than they should be. At this point the structure and verbosity of the original report are massive flashing red flags that this is AI slop; the detail is immediately and obviously out of line with the complexity. A better report would've been "Yo dawg, I think there's a buffer overflow in this strcpy." The first thing I would've replied is "Are you sure?", because I suspect the AI would've immediately come back with the classic "You're absolutely right that..." without anyone bothering to look at the code.

I think the natural response will just be lower responsiveness from the maintainers to anonymous reports.

  • Trasmatta 15 hours ago

    Even the responses were so obviously written by AI:

    > I used to love using curl; it was a tool I deeply respected and recommended to others. However, after engaging with its creator, I felt disrespected, met with a lack of empathy, and faced unprofessional behavior. This experience has unfortunately made me reconsider my support for curl, and I no longer feel enthusiastic about using or advocating for it. Respect and professionalism are key in any community, and this interaction has been disappointing.

    Some of the maintainers tried to keep engaging at that point, but it's so clearly just ChatGPT!

    • Swizec 14 hours ago

      Note that this also sounds exactly like the median corporate email. I could see my coworkers writing this. The higher into middle management they are, the likelier.

    • Bluestein 15 hours ago

      > This experience has unfortunately made me reconsider my support for curl

      ... "delving deeper ..." :)

  • LtWorf 15 hours ago

    What's the difference between anonymous and an account used by a machine?

    • a_wild_dandan 13 hours ago

      friction

      • LtWorf 7 hours ago

        They sell stars for nothing. There are loads of automated accounts in use today already.

gauge_field 15 hours ago

I have encountered several spamming bots. In all instances, I reported the issue to GitHub support. They were very quick (beyond my expectation) to respond. I suggest others do the same. It has been really easy and effective, as far as my experience goes.

JoeAltmaier 16 hours ago

Who thinks they are contributing something when they do crap like that? Setting up an AI to bomb a group of people working for free on something they value. Gotta be a kid with no sense.

  • hyhconito 15 hours ago

    I know a guy who does this. He finds a problem, then tells ChatGPT about it. ChatGPT elaborates it into dross. He says "look at the magical output" without reading it or bothering to understand it, then posts it to the vendor. The vendor gets misled and the original issue is never fixed. Then I have to start the process again from scratch after two weeks have been wasted on it.

    The root cause is that LLMs are a damage multiplier for fuckwits. It is literally an attractive hammer for the worst of humanity: laziness and incompetence.

    I imagine that could be weaponised quite easily.

    • Bluestein 15 hours ago

      > The root cause is that LLMs are a damage multiplier for fuckwits.

      Reminds me of Eco's quote about giving the "village idiot" a megaphone. But, transposed to the age of AI.-

      • hyhconito 15 hours ago

        It's much worse than that. It's giving the village idiot something that turns their insane ramblings into prose that is incredibly verbose and sounds credible, but inherits both the original idiot's poor communication and the subtle ramblings of an electronic crackhead.

        Bring back the days of "because it's got electrolytes" because I can easily ignore those ones.

        • fakedang 15 hours ago

          To quote another frontpage article, it transforms the village idiot into a "Julius".

          • hyhconito 15 hours ago

            Oh shit I just read that and am utterly horrified because I've been through that and am going through it. I have instantly decided to optimise myself to retire as quickly as possible.

            • fakedang 15 hours ago

              Don't worry, you're not alone. I'm in the same boat. :)

    • RajT88 7 hours ago

      My wife has a coworker like this.

      Except instead of bug reports, he just gets some crap code written and sends it to her, assuming it can be dropped in and run. (It is often wrong.)

    • rsynnott 3 hours ago

      But _why_? What’s his motivation for doing this, vs just writing a proper report?

    • nullc 15 hours ago

      > I imagine that could be weaponised quite easily.

      I've been dealing with a vexatious con artist who has been using ChatGPT to dump thousands of pages of garbage on the courts and my legal team.

      The plus side is that the output is exceptionally incompetent.

      • vouaobrasil 13 hours ago

        > The plus side is that the output is exceptionally incompetent.

        It won't be for long. This is reminiscent of the development of the first rifles, which often jammed or misfired, and weren't very accurate at long range. Now look at weapons like a Barrett .50 cal sniper rifle -- that's what AI will look like in 10 years.

        • rsynnott 3 hours ago

          Ah, yes, AI jam tomorrow.

          (Though, perhaps an unusually pessimistic example of the “real soon now, it’ll be usable, we promise” phenomenon; rifles took about 250 years to go from ‘curiosity’ to ‘somewhat useful’).

        • hyhconito 13 hours ago

          I keep hearing this, but the current evidence, the asymptotic progress, and the financials say otherwise.

          • vouaobrasil 12 hours ago

            I guess what you are saying would probably have been said by AI skeptics in the '70s, but LLMs provided a quantum leap. Yes, progress is often asymptotic and governed by diminishing returns, but discontinuous breakthroughs must also be factored in.

      • Bluestein 15 hours ago

        Goodness gracious. Are we getting to DDoJ? (Denial of Justice by AI?) ...

        ... getting to an "AI arms race" where the team with the better AI "wins" - if nothing else by virtue of merely being able to survive the slop and get to the actual material - and then, of course, argue.-

  • ozim 15 hours ago

    Or it might be a state threat actor trying to tire people out and then plant some stuff somewhere in between. Like Jia Tan, but 100x more often, or getting their people to "help" with the cleanup.

    You just underestimate evil people. We are long past the "bored kid in a hoodie in his parents' basement".

    Any piece of OSS code that might end up used by a valuable, or even not-so-valuable, target is of interest to them.

    • Bluestein 15 hours ago

      I have got to say that - on a first, naive, approach - the whole situation hit me in a very "supply chain attack" way too.-

  • kichik 15 hours ago

    They might be looking for some open-source fame. The contribution to their resume is more important than the contribution to the project.

    • 0points 15 hours ago

      I fixed a single-word typo in a doc string in github.com/golang/go; it resulted in a CONTRIBUTORS entry and an endless torrent of spam from "headhunters".

    • ramon156 15 hours ago

      This was an issue without LLMs too, and it sucks. GH has a "good first issue" tag, which always gets snatched by someone who only cares about the contribution line. Sometimes they just let it sit for weeks because they forgot that they now have to actually do the work.

  • pimlottc 15 hours ago

    People do things like this because it makes their gamified GitHub metrics go up.

    • marcus0x62 14 hours ago

      I call those drive-by PRs. If you work on an even moderately popular project, you’ll end up with people showing up - who have never contributed before - and submitting patches for stuff like typos in your comments that they found with some auto-scanning tool.

      As far as I can tell, there are people whose entire GitHub activity is submitting PRs like that. It would be one thing if they were diving into a big codebase, trying to learn it, and wanted to submit a patch for some small issue they found to get acquainted with the maintainers, or just to contribute a little as they learned. But they drop one patch like that and then move on to the next project.

      I don’t understand the appeal, but to each their own, I guess.

      • exsomet an hour ago

        Genuine curiosity: admittedly these sorts of “contributors” probably aren’t doing it out of a passion for FOSS or any particular project, but if it’s something that fixes an issue (however small), is that actually a net negative for the project?

        Maybe I have some sort of bias, but it feels like a lot of the projects I see specifically request help with these sorts of low-hanging-fruit contributions (lumping typos in with documentation).

    • vouaobrasil 13 hours ago

      This is exactly why it's a bad thing in general to have a single metric or small group of metrics that can be optimized. It always leads to bad actors using technical tools to game them. But we keep making the same mistake.

    • LtWorf 15 hours ago

      You could just do it on fake projects created for metrics as well, so nothing real is harmed :D

  • janice1999 16 hours ago

    It's people, usually students, trying to pad out their GitHub activity and CVs.

    • Bluestein 15 hours ago

      It's an insidious incentive, in an age where an AI is going to look through your CV and not care much - or be able to tell the difference ...

      • esperent 15 hours ago

        If the AI was told to care, identifying low-grade or insignificant contributions is well within its capabilities.

        • vouaobrasil 13 hours ago

          Not in 10 years when the contributions become more sophisticated. Like many other scenarios of this time, it's an arms race.

          • esperent 11 hours ago

            If the contributions become sophisticated enough to be actually good, then problem solved, right?

  • codedokode 10 hours ago

    On HackerOne you can get money for a bug report; could that be the reason? I think the first sentence of the report was probably written by a human and the rest by AI. The report is unnecessarily wordy, has typical AI formatting, and its several paragraphs of detailed "you are absolutely right" explanation are signs of an LLM.

  • Bluestein 16 hours ago

    I am trying - honestly - to wrap my head around it ...

    Who knows. Might be some sort of "distributed attack" against Open Source by some nefarious actor?

    I am still thinking about the "XZ Utils" fiasco. (Not implying they are related, anyhow).-

    • llamaimperative 15 hours ago

      Have you been on the internet? There are plenty of just plain moronic people out there producing (with the help of LLMs) ample bullshit, enough to clog approximately any quasi-public information channel.

      Such results used to require sophistication and funding, but that is no longer true.

      • anon373839 12 hours ago

        I’d like to inject a personal gripe here, namely: the people who take the time to answer questions on Amazon with “I don’t know.” Why.

      • Bluestein 15 hours ago

        Awful. See comment about "morons" upthread ...

    • eddsolves 15 hours ago

      Nah, it’s not intentionally malicious; people are just trying to pad their resumes for new roles while unemployed or as students. I did the same years ago (not with AI, but by picking up the low-hanging easy tasks to add a few lines to my CV).

      • aziaziazi 15 hours ago

        Wonder if you had to give more detail on those experiences during the interviews? How did it go?

      • Bluestein 15 hours ago

        Thanks. Yes, this as a modus is becoming apparent from the threads.-

    • 0points 15 hours ago

      No need to go there.

      There is a simpler explanation, and it is being discussed in the comments.

      Kids trying to farm GitHub fame using LLMs.

prepend 15 hours ago

It seems like the answer to this is just reputation limits. There’s no good “programmer whuffie” system, but I imagine requiring an active GitHub account over a year old would reduce the problem.
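
A sketch of that age check against GitHub's public REST API (GET /users/{username} does return created_at; the threshold is arbitrary):

    // Returns true if the account is at least minAgeDays old.
    async function isAccountOldEnough(
      username: string,
      minAgeDays = 365,
    ): Promise<boolean> {
      const res = await fetch(`https://api.github.com/users/${username}`, {
        headers: { Accept: "application/vnd.github+json" },
      });
      if (!res.ok) return false; // unknown account: treat as too new
      const { created_at } = (await res.json()) as { created_at: string };
      const ageDays = (Date.now() - Date.parse(created_at)) / 86_400_000;
      return ageDays >= minAgeDays;
    }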

And then post the email address and handle of spam submitters so they are found when potential employers google them.

I will always google applicants as part of the interview process. If I found they had submitted this kind of trash, it would seriously hurt their chances of being hired.

  • calvinmorrison 15 hours ago

    If someone doesn't have a GitHub, I'll be more impressed.

    • aziaziazi 15 hours ago

      Would you share why, if you don’t mind?

      • LtWorf 15 hours ago

        He's from Silicon Valley; parents there create GitHub accounts for their children before they are born. /s

Havoc 14 hours ago

Feels like a prelude to similar issues that will crop up in other areas of society. I’d say most processes are not resilient against this.

  • aprilthird2021 14 hours ago

    The worst thing is that AI will make it even harder for human beings to talk to other human beings for support or to fix problems, because the bad-faith actors AI-DDoSing every channel will cause businesses to take precautions to avoid spending actual money on responding to AI garbage.

ksajadi 15 hours ago

We also get a lot of those for our service. I’m not sure if they are AI-generated, but many are low quality. The problem is that, as part of our processes, we are required to respond to and triage every report and keep an audit trail for them.

As a result, I started a project to use various fine-tuned LLMs to do this part for us. I guess this is a case of needing a bigger gun to deal with the guys with guns!
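
Roughly the shape of it; everything here is a hypothetical stand-in for the real models and storage:

    // Classify each report and keep an audit record of the decision.
    type Triage = {
      verdict: "actionable" | "low-quality" | "spam";
      reason: string;
    };

    // Stand-in for a call to one of the fine-tuned models.
    async function llmClassify(report: string): Promise<Triage> {
      const spammy = /absolutely right|deeply respected/i.test(report);
      return {
        verdict: spammy ? "low-quality" : "actionable",
        reason: "stub heuristic",
      };
    }

    const auditLog: Array<{ id: string; at: string; triage: Triage }> = [];

    async function triageReport(id: string, report: string): Promise<Triage> {
      const triage = await llmClassify(report);
      auditLog.push({ id, at: new Date().toISOString(), triage }); // audit trail
      return triage;
    }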

  • Bluestein 15 hours ago

    "The only thing that stops a bad guy with an AI is ... yadda yadda :)

Kelvin506 3 hours ago

Given that LLMs aren't able to properly understand code, would it be feasible and useful to create AI honeypots?

For example, add some dead code that contains an obvious bug (like a buffer overflow). The scanbots catch it, submit the PR, get banned.
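
For a Node project, the bait might look like this; JS has no strcpy, so an obvious command-injection pattern plays the same role (all names hypothetical):

    import { exec } from "node:child_process";

    // Dead code: nothing in the project ever calls this. The
    // unsanitized interpolation into a shell command is deliberate
    // bait; any "security report" that cites this function came from
    // an automated scanner, not a human reading the call graph.
    export function honeypotRunReport(userInput: string): void {
      exec(`generate-report --name ${userInput}`);
    }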

bhouston 15 hours ago

Can we also automate the response to these via AI?

Can we have AI bug responses? By default, GitHub would assess each bug report using AI and give us a suggested response or analysis. If it's a simple fix, just propose a PR. If it's junk or not understandable, say so. And if it's a serious issue, confirm that and raise awareness.
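
A sketch of one piece of that flow: suggestResponse is a hypothetical stand-in for the LLM call, while the comments endpoint is GitHub's real REST API (POST /repos/{owner}/{repo}/issues/{number}/comments):

    // Stand-in for the AI assessment of the report.
    async function suggestResponse(issueBody: string): Promise<string> {
      return "Automated triage: could not reproduce; please attach a minimal test case.";
    }

    // Post the suggested response back to the issue.
    async function postTriageComment(
      owner: string,
      repo: string,
      issueNumber: number,
      issueBody: string,
      token: string,
    ): Promise<void> {
      const body = await suggestResponse(issueBody);
      const res = await fetch(
        `https://api.github.com/repos/${owner}/${repo}/issues/${issueNumber}/comments`,
        {
          method: "POST",
          headers: {
            Authorization: `Bearer ${token}`,
            Accept: "application/vnd.github+json",
            "Content-Type": "application/json",
          },
          body: JSON.stringify({ body }),
        },
      );
      if (!res.ok) throw new Error(`GitHub API error: ${res.status}`);
    }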

Personally, I want to move towards self-maintaining GitHub repositories. I should only be involved for the high-level tasks or gatekeeping new PRs, not the minor issues that are trivial to address.

We need to not simply fight AI, but rather use it to up-level everyone.

  • vouaobrasil 13 hours ago

    > We need to not simply fight AI, but rather use it to up-level everyone.

    This is an arms race. And unlike traditional arms, because it involves intellectual capabilities of a machine, there may be no limit to the race. It does not sound like a good world in which everyone is fighting everyone else with advancing AIs that use increasingly more energy to train.

    It's the mechanization of the broken window fallacy.

  • ThrowawayR2 13 hours ago

    People buying LLM services to combat a problem other people created by buying LLM services, incentivizing the latter to try harder by buying more LLM services? A perfect vicious circle. The LLM providers will surely be laughing all the way to the bank.

  • LtWorf 15 hours ago

    > Can we also automate the response to these via AI?

    How to make github issues entirely useless :D

aziaziazi 14 hours ago

Let's start captcha+3FA for bug reports, and then for every single text field on the web.

hrthagf 14 hours ago

Ironically, CPython itself is already inundated with junk bug reports from core developers themselves, some of whom cash in on "fixing" the fake issues.

Or sometimes bugs are introduced by the endless churn and then attributed to someone who wrote the original bug-free code, which leads to more money and (false) credit for the churn experts.

  • kosayoda 12 hours ago

    Do you have a source for this claim? I'm curious.

nullc 15 hours ago

Return to cathedral.

rurban 10 hours ago

I have had only positive experiences so far: a few well-written issues and even PRs, driven by fuzzers. Not bad.

I have had much worse reports from humans, including CVEs that were invalid and absolute trash.

And the recent trend of reports generated by ChatGPT is insulting.

Uptrenda 15 hours ago

Yeah, if you think that's bad, I once had someone submit a pull request linting my entire project's code. It was a single 20k+ line change. To merge the output of that one command, I would have had to read every changed line to make sure it wasn't malicious. I decided in the end it wasn't worth the effort and rejected it.

  • LtWorf 14 hours ago

    Eh, some idiot did a similar thing: one commit that changed basically the whole project.

    I told him to split it into several commits, and they were just… the same shit, arbitrarily divided into several commits, with no logical separation and no way to reject one commit and accept another.

    I said I wasn't going to accept that crap and he got offended.

anonnon 15 hours ago

This is not a new thing. A decade ago, when I was more active in OSS, I remember occasionally seeing bizarre posts on our mailing lists that had a distinctly Markovian feel and included hallucinated snippets of code. In some cases they used completely unnecessary and out-of-character (for our lists) profanity. These posts were often plausible enough that they usually netted a legitimate reply or two before someone pointed out the OP was a bot.

The goal, in some way or another, seemed to be spam: either harvesting email addresses, or gaining access to some venue (I guess an issue tracker?) where spam could be posted.

  • 0points 15 hours ago

    I have seen a few cases in the last couple of years of FOSS bug reports where the author used PVS-Studio or a similar static code analysis tool and made a big deal about perceived issues without really understanding what's going on.

    These are not LLMs at all, but it's the same general issue: it takes 10 seconds to generate a report, but it takes the FOSS maintainers days or weeks to comb through all the noise.

    Most recently, this one https://github.com/hrydgard/ppsspp/issues/19515

  • Bluestein 15 hours ago

    > Markovian feel that included hallucinated snippets of code

    ... "prior art" for hallucinated (confabulated) code ...

    PS. Sometimes methinks any "moderation"/interaction issue we might encounter nowadays was faced/dealt with on IRC before.-

    • avian 15 hours ago

      LLMs/generative AI are a significant change from what we had to deal with before, both in terms of volume and accessibility to the common fuckwit (to borrow the term from another thread), and in terms of moderator time per interaction (because it's now significantly harder to recognize this sort of content).

      • Bluestein 15 hours ago

        Certainly. I do agree.-