New development policy: code generated by a large language model or similar technology (e.g. ChatGPT, GitHub Copilot) is presumed to be tainted (i.e. of unclear copyright, not fitting NetBSD’s licensing goals) and cannot be committed to NetBSD.

https://www.NetBSD.org/developers/commit-guidelines.html

  • Zos_Kia@lemmynsfw.com · 5 months ago

    I’m saddened to use this phrase, but it is literally virtue signalling. They have no way of knowing lmao

    • best_username_ever@sh.itjust.works · 5 months ago

      It’s actually simple to detect: if the code sucks or is written by a bad programmer, but the docstrings are perfect, it’s AI. I’ve seen this more than once and it never fails.

      • TimeSquirrel@kbin.social · 5 months ago

        So your results are biased, because you’re not going to see the decent programmers who are just using it to take mundane tasks off their backs (like generating boilerplate functions) while staying in control of the logic. You’re only ever going to catch the noobs trying to cheat without fully understanding what it is they’re doing.

        • best_username_ever@sh.itjust.works · 5 months ago

          “You’re only ever going to catch the noobs.”

          That’s the fucking point. Juniors must learn, not copy-paste random stuff. I don’t care what seniors do.