• Fushuan [he/him]@lemm.ee
      +6 · 9 months ago

      Actually, I'd prefer it if individual users pirating were considered fair use, but corporations pirating were not. So their pirating is not fine, but ours should be.

    • jaden@lemmy.zip
      +1 −3 · 9 months ago

      Yeah, too much of this thread is hypocritical. Either stuff that's free to copy should be free, or it shouldn't.

  • bartolomeo@suppo.fi
    +64 · 9 months ago

    “We didn’t do it, and if we did it was fair use, and if it wasn’t progress will be hampered if rules and regulations are too strict.”

  • onlinepersona@programming.dev
    +31 −3 · 9 months ago

    I do wonder how it shakes out. If the case establishes that a license must be acquired to use copyrighted material, then maybe the license I'm setting on my comments could land commercial AI companies in hot water too - which I'd love. Open-source AI models FTW

    CC BY-NC-SA 4.0

    • jarfil@beehaw.org
      +10 −1 · 9 months ago

      That license would require the AI model to only output content under the same license. Not sure if you realize, but commercial use is part of the Open Source Definition:

      https://opensource.org/osd/

      Your content would just get filtered out from any training dataset.

      As for going against commercial companies… maybe you are a lawyer, otherwise good luck paying the fees.

  • rufus@discuss.tchncs.de
    +26 · edited · 9 months ago

    AI is just too much of a hype. Every company invests millions into AI, and all new products need to "have AI". And then everybody also needs to file lawsuits. I mean, rightly so if Meta just pirated the books, but that's not a problem with AI; it's plain old piracy.

    I was pretty sure OpenAI and Meta didn't license gigabytes of books correctly for use in their commercial products. Nice that Meta now admitted to it. I hope their "fair use" argument works, and in the future we can all "train AI" with our "research dataset" of 40 GB of ebooks. Maybe I'm even going to buy another hard disk and see if I can train an AI on 6 TB of TV series, all the Marvel movies, and a broad mp3 collection.

    Btw, there was no denying anyway. Meta wrote a scientific paper about their LLaMA model in March of last year, and they clearly listed all of their sources, including Books3. Other companies aren't that transparent, and even less so as of today.

  • msgraves@lemmy.dbzer0.com
    +31 −18 · 9 months ago

    ohno my copyright!!! How will the publisher megacorps now make a record quarter??? Think of the shareholders!

    • Waluigis_Talking_Buttplug@lemmy.world
      +53 −6 · 9 months ago

      That's not the takeaway you should be having here. It's that a megacorp felt it should be allowed to create new content from someone else's work, both without their permission and without paying.

      • msgraves@lemmy.dbzer0.com
        +16 −1 · 9 months ago

        ok, fair; but do consider the context that the models are open weight. You can download them and use them for free.

        There is a slight catch, though, which I'm very annoyed at: it's not actually Apache. It's this weird license where you can use the model commercially up until you have 700M monthly users, after which you have to request a custom license from Meta. OK, I kind of understand them not wanting companies like ByteDance or Google using their models just like that, but Mistral has their models as open weights under Apache-2.0, so the context should definitely be reconsidered, especially for Llama 3.

        It's kind of a thing right now: publishers don't want models trained on their books, "because it breaks copyright", even though the model doesn't actually remember copyrighted passages from the book. Many arguments hinge on the publishers being mad that you can prompt the model to repeat a copyrighted passage, which it can do. IMO this is a bullshit reason.

        anyway, will be an interesting two years as (hopefully) copyright will get turned inside out :)

  • howrar@lemmy.ca
    +8 · 9 months ago

    I’m pretty sure “admits” implies an attempt to hide it. They’ve explicitly said in the model’s initial publication that the training set includes Books3.

  • dumpsterlid@lemmy.world
    +3 −10 · edited · 9 months ago

    What a bunch of losers, thinking they are making the future… by stealing from as many artists as they can? How do you convince yourself you are doing the right thing when what you are doing is scaling up the theft of art from small artists to a tech-company-sized operation?

    And how much oxygen has been wasted over the years by music companies pushing the narrative that "stealing" from artists by torrenting is wrong? This is so much worse than stealing (and a million times worse than torrenting), though, because the point of the theft is to destroy the livelihood of the artist who was stolen from and turn their art into a cheap commodity that can be sold as a service, with the artist seeing none of the monetary or cultural reward for their work.

    • Kissaki@feddit.de
      +7 · 9 months ago

      Did you just make a contradictory argument for both sides?

      Is your distinction that piracy by individuals gives cultural recognition while that of corporations doesn’t?

      If you think piracy is warranted, at the cost of artists/creators, how is a generalized AI that makes it available and more accessible as a cultural abstracted good different?

      • nevernevermore@kbin.social
        +3 · 9 months ago

        I'm going to imagine it's because that cultural abstracted good is then put behind a paywall, which OP will then also pirate, thus fulfilling the prophecy.

      • dumpsterlid@lemmy.world
        +4 −3 · edited · 9 months ago

        Because I don’t see a strong argument for piracy coming at a direct, immutable cost to artists. I also don’t see a strong argument that piracy reduces the chance fans will pay for art when the art is made decently easy to purchase and is being sold at a reasonable price. Of course there are complexities to this discussion but ultimately when you compare it to massive corporations wholesale stealing massive amounts of works of art with the specific intention of undercutting and destroying the value of said art by attempting to commodify it I think the difference is pretty clear. One of these things is a morally arguable choice by one individual, the other is class warfare by the rich.

        Joe Schmo torrents an album from a band he likes; maybe he buys the album in the future, or goes to a concert and buys merch. Joe Schmo hasn't mined some economic gain out of a band and then moved on; Joe Schmo has become more of a committed fan because he loves the album. Meta steals from a band so that they can create an algorithm that produces knockoff versions of the band's music, which Meta can sell to, say, a company making a commercial that wants music in that style but would prefer not to pay an actual human artist a fair price for it. These are not the same.

        (AI can't necessarily create convincing fake songs yet, but you get my point as it applies to other art that AI can create convincing examples of, books and writing being prime examples.)

    • FaceDeer@kbin.social
      +3 · 9 months ago

      What a bunch of losers, thinking they are making the future… by stealing from as many artists as they can?

      Are you aware of which community this is posted in?

      • Waluigis_Talking_Buttplug@lemmy.world
        +8 · 9 months ago

        Meta stealing intellectual property and utilizing it for corporate gain is not the same as normal users pirating content. They are so far apart that it warrants its own discussion and cannot be lumped in together.

      • dumpsterlid@lemmy.world
        +4 −2 · 9 months ago

        I didn’t realize at first, my bad. I realize that makes a lot of my post redundant but I think my point still stands.

        There's so much hypocrisy in the fact that a massive corporation can actually steal like this and it's more socially acceptable than torrenting.

        • Waluigis_Talking_Buttplug@lemmy.world
          +7 −1 · 9 months ago

          And that's the issue I in particular have. It's a double standard, and not only that, they're using it to generate money for their own tools.

          It's not the same as some kid pirating Photoshop to play around with, or a couple who are curious about GOT and want to watch it without paying HBO.

          This is a separate issue and I hate that this place is so reddit like that trying to talk about it gets “hurrr dur I guess you’re mad because AI and meta are just the current hate train circle jerk hurrr i form my own opinions hurr”

          Like, no, I'm upset because this is a whole new category of piracy.

          • j4k3@lemmy.world
            +3 −4 · 9 months ago

            I'm not upset, because I think it is totally irrelevant: training an AI is not reproducing any works, and it is no different from a person who reads or sees those works talking about them or creating in their style.

            At its core, if this is given legal precedent, the final distilled issue amounts to thought policing. It would be a massive regression of fundamental human rights with terrible long-term implications. This is no different from how allowing companies to own your data and manipulate you has directly led to a massive regression of human rights over the last 25 years. Reacting like foolish Luddites to a massive change that seems novel in the moment will have far-reaching consequences most people lack the fundamental logic skills to put together in their minds.

            In practice, offline AI is like having most of the knowledge of the internet readily available for your own private use, in a way that is custom-tailored to each individual. I'm actually running large models on my own computer daily. This is not hypothetical or hyperbole; this is empirical.