• Blackmist@feddit.uk

    It seems like it’s only copyright infringement when poor people take rich people’s stuff.

    When it’s the other way round, it’s fair use.

  • Uriel238 [all pronouns]@lemmy.blahaj.zone

    If letting AI train on other people’s works is unjust enrichment, then what the record labels did to creatives throughout the entire 20th century, taking ownership of their work through coercive contracting, is extra-unjust enrichment.

    Not saying it isn’t, but it’s not new, and it’s bothersome that we’re only complaining loudly now.

    • FiskFisk33@startrek.website

      Don’t misunderstand me now, I really don’t want to defend record companies, but

      legally they made deals and wrote contracts. It’s not really the same thing.

      • Uriel238 [all pronouns]@lemmy.blahaj.zone

        When the labels held an oligopoly on access to the public, it was absolutely coercive when the choice was between having your work published while you got screwed vs. never being known ever.

        This is one of the reasons the labels were so resistant to music on the internet in the first place (which Thomas Dolby and David Bowie were experimenting with in the early 1990s), and why they had US ICE raid the Dotcom estate in New Zealand: it wasn’t just about MegaUpload sometimes being used for piracy. (PS: That fight is still going on, twelve years later.)

    • stellargmite@lemmy.world

      Yep. And the streaming tech bros’ collusion with the industry mobsters took it to another level. The people making the art are a mere annoyance to the jerks profiting from it. And yet the AI which they think saves them from this annoyance requires the art to be created in the first place. I guess the history of recorded music holds a fair amount to plunder. But art, and even pop music, is an expression and reflection of individuals and the wider zeitgeist: actual humanity. I don’t see what value is added when a person creates something semi-unique and a supercomputer burns massive amounts of energy to mimic it. At this stage all of supposed AI is a marketing gimmick to sell things. Corporations once again showing their hostility to humanity.

  • Vince@lemmy.world

    Ok, dumb question time. I’m assuming no one has any significant issues, legal or otherwise, with a person studying all Van Gogh paintings, learning how to reproduce them, and using that knowledge to create new, derivative works and even selling them.

    But when this is done with software, it seems wrong. I can’t quite articulate why though. Is it because it takes much less effort? Anyone can press a button and do something that would presumably take the person from the example above years or decades to do? What if the person was somehow super talented and could do it in a week or a day?

    • FooBarrington@lemmy.world

      There’s a simple argument: when a human studies Van Gogh and develops their own style based on it, it’s only a single person with very limited output (they can only paint so much in a single day).

      With AI you can train a model on Van Gogh and similar paintings, and infinitely replicate this knowledge. The output is almost unlimited.

      This means that the skills of every single human artist are suddenly worth less, and the possessions of the rich are suddenly worth more. Wealth concentration is poison for a society, especially when we are still reliant on jobs for survival.

      AI is problematic as long as it shifts power and wealth away from workers.

      • saplyng@lemmy.world

        Just as an interesting “what if” scenario: a human making the effort to imitate Van Gogh’s style is okay, and the problem with the AI model is that it can spit out endless results from endless sources.

        What if I made a robot and put the Van Gogh painting AI in it, never releasing it elsewhere? The robot can visualize countless iterations of the piece it wants to make, but its only way to share them is to actually paint them, much in the same way a human must.

        Does this scenario devalue human effort? Is it an acceptable use of AI? If so does that mean that the underlying issue with AI isn’t that it exists in the first place but that its distribution is what makes it devalue humanity?

        *This isn’t a “gotcha”, I just want a little discussion!

        • FooBarrington@lemmy.world

          It’s an interesting question! From my point of view, “devaluing human effort” (from an artistic perspective) doesn’t really matter - humans will still be creating new and interesting art. I’m solely concerned about the shift in economic power/leverage, as this is what materially affects artists.

          This means that if your robot creates paintings with an output rate comparable to a human artist, I don’t really see anything wrong with it. The issue arises once you’re surpassing the limits of the individual, as this is where the power starts to shift.

          As an aside, I’m still incredibly fascinated by the capabilities and development of current AI systems. We’ve created almost universal approximators that exhibit complex behavior which was pretty much unthinkable 15-20 years ago (in the sense that it was expected to take much longer to achieve current results). Sadly, like any other invention, this incredible technology is being abused by capitalists and populists for profit and gain at the expense of everyone else.

    • taaz@biglemmowski.win

      I am guessing the closest opposite argument would be how close it is to outright copying the original work?

    • Dkarma@lemmy.world

      Easier than that:

      Google has been doing this for years for their search engine and no one said a thing. Why do you care now that it’s a different program scanning your media?

    • wewbull@feddit.uk

      Dumb question: why do you feel you need to defend billion dollar companies getting even richer off somebody else’s work?

      Also Van Gogh’s works are public domain now.

      • Vince@lemmy.world

        I’m not defending any companies, just thinking out loud, but I suppose I can see how it reads that way.

        I was just asking myself why it feels wrong when a machine does it vs when a human does it. By your argument, would it be ok if some poor nobody invented and is using this technology vs a billion dollar company? Is that why it feels wrong?

        • tjsauce@lemmy.world

          The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

          • TheRealKuni@lemmy.world

            The issue isn’t the final, individual art pieces, it’s the scale. An AI can produce sub-par art quickly enough to threaten the livelihood of artists, especially now that there is far too much art for anyone to consume and appreciate. AI art can win attention via spam, drowning out human artists.

            This is literally what people said about photography.

            And they were right, painting became less prolific as photography became available to the masses. People generally don’t get their portrait painted.

            But people also generally don’t go to photo studios to have their picture taken, either, and those used to be in every shopping mall. But now we all have camera phones that adjust lighting and color and focus for us, and we can send a sufficiently decent picture off to be printed and mailed back to us. For those who want it done professionally that option is available and will be higher quality, just like portrait painting is still available, but technology has shrunk those client pools.

            Technology always changes job markets. Generative AI will, just as others have done. People will lose careers they thought were stable, and it will be awful, but this isn’t anything unique to generative AI.

            The only constant is that things change.

    • Cornelius_Wangenheim@lemmy.world

      Artists who rip off other great works are still developing their talent and skills, which they can then go on to use to make original works. The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.

      There is a very real danger of AI eviscerating the ability of artists to make a living, leaving very few people with the financial ability to practice their craft day in and day out, and resulting in a dearth of good original art.

      • Dkarma@lemmy.world

        The machine will never produce anything original. It is only capable of mixing together things it has seen in its training set.

        This is patently false and shows you don’t know a single thing about how ai works.

    • bluestribute@lemmy.world

      If someone studies Van Gogh and reproduces images, they’re still not making Van Gogh - they’re making their art inspired by Van Gogh. It still has their quirks and qualms and history behind the brush making it unique. If a computer studies Van Gogh and reproduces those images, it’s reproducing Van Gogh. It has no quirks or qualms or history. It’s just making Van Gogh as if Van Gogh was making Van Gogh.

      • Drewelite@lemmynsfw.com

        There are tons of artists who copy others very closely. And there are plenty of examples of A.I. making all kinds of unique and quirky artwork despite drawing from existing artworks. Feels like you’re backing into a grey area of opinion so that you can stick to a framework that fits a narrative.

    • aStonedSanta@lemm.ee

      They are copying your intellectual property and digitizing its knowledge. It’s a bit different, as it’s PERMANENT. With humans, knowledge can be lost, forgotten, or ignored. In these LLMs that’s not an option. Also, the skill factor is a big issue imo. It’s very easy to set up an LLM to make AI imagery nowadays.

    • UnderpantsWeevil@lemmy.world

      The intense hatred for “stealing” content might be blunted if all the subsequent work product goes to the public domain.

      But what do you do when you start getting copyright-struck on your own works, because someone else decided to steal them and claim ownership?

      • Dkarma@lemmy.world

        The intense hatred for “stealing” content might be blunted if all the subsequent work product goes to the public domain

        Fun fact…It does!

      • wewbull@feddit.uk

        People talk about open source models, but there’s no such thing. They are all black boxes where you have no idea what went into them.

        • UnderpantsWeevil@lemmy.world

          People talk about open source models, but there’s no such thing.

          :-/

          Source code isn’t real? Schematics and blue prints don’t exist?

          • Dkarma@lemmy.world

            The guy you’re responding to is clueless as to how this actually works.

  • werefreeatlast@lemmy.world

    Have you or a friend used YouTube or reddit in the past 10 years? Then you’re entitled to compensation for the training of AI.

    • xenoclast@lemmy.world

      Capitalism is the problem. Greed is the reason. I like that shitty idiots are fighting other shitty idiots, because I think it’s funny… but neither party is the good guy.

      • mm_maybe@sh.itjust.works

        Capitalism is precisely the problem, because if the end product were never sold nor used in any commercial capacity, the case for “fair use” would be almost impossible to challenge. They’re betting on judges siding with them in extending a very specific interpretation of fair use that has been successfully applied to digital copying of content for archival and distribution as in e.g. Google Books or the Internet Archive, which is also not air-tight, just precedent.

        Even fair uses of media may not respect the dignity of the creators of works used to create “media synthesizers”. In other words, even if a computer science grad student does a bunch of scraping for their machine learning dissertation, unless they ask and get permission from the creators, their research isn’t upholding the principle of data dignity, which current law doesn’t address at all, but is obviously the real issue upsetting people about “Generative AI”.

        • 【J】【u】【s】【t】【Z】@lemmy.world

          I’m not sure I follow that first sentence.

          Fair use is an affirmative, positive defense to liability under the Copyright Act. It only exists as a concept because there is a marketplace for creative work.

          That marketplace, the framers of the Constitution would suggest, only exists because the Constitution allows Congress to grant exclusive licenses to creative Works (i.e., copyright protection). In other words, they viewed creative work as driven by economics: by securing an exclusive license to the artist, she can make money and create more art.

          I am of the belief that even if there was no marketplace for creative work (no exclusive licensing / no copyright laws), people are still inherently creative and will still make creative things. I think the economic model of creativity enshrined in the Constitution is what gives us stuff like one decent movie followed by four shitty sequels. We have tens of thousands of years of original artworks, creative stories, songs, sculptures, etc. The only thing the copyright clause does, in my view, is concentrate the profit from creativity into the hands of a few successful artists or, more likely, a few large employers, such as George Lucas or Walt Disney, Viacom, Comcast, etc.

          I think this unjust enrichment claim comes as close as anything I’ve heard of to data dignity. It’s not a lawsuit to enforce a positive legal right, but rather a plea to the court’s equity to correct a manifest injustice and restore the parties to a more just position.

          That the AI companies have been enriched at the detriment of the artists seems obvious. What makes it unjust is that the defendants had no permission and did not pay the artist.

  • blazera@lemmy.world

    AI ain’t going away; it’s already commonly running on local machines, and being used covertly.

  • diamond_shield@reddthat.com

    I don’t think it would be all that difficult to make “Ethical” AI.

    Simply cite the sources you used, and make everything public domain: the data used, the models, and the weights.

    It baffles me as to why they don’t, wouldn’t it just be much simpler?

    • Buffalox@lemmy.world

      Simply refer to the sources you used

      Source: The Internet.

      Most things are duplicated thousands of times on the Internet, so listing sources would very quickly grow longer than almost any answer from an AI.

      But even disregarding that, as an example: stating on a general, publicly available page documenting the AI that you scraped Republican and Democrat home sites does not explain which, if any, was used for answering a particular political question.

      Your proposal sounds simple, but is probably extremely hard to implement in a useful way.

    • Blaster M@lemmy.world

      So, you want an AI LLM trained to respond like a person from ~180 years ago, with their highly religious and cultural bias from a time so far removed from ours that you would feel offended by its answers, with no knowledge of anything from the past 100+ years? Would you be able to use such a thing in daily life?

      Consider that even school textbooks are copyrighted, and that people writing open source projects are sometimes offended by their OPEN SOURCE CODE being used to train AI. If you took the full “no offense taken” approach, you would basically cut away the AI model’s ability to learn basic human knowledge, or even to do the thing it’s actually “good” at.

      The other part of the problem is that, legally speaking, forbidding training on copyrighted data opens up a huge window for companies with aggressive copyright enforcement to effectively end all fan works of something, or even forbid people from making things with even a hint that their concept was conceived after vaguely hearing about or seeing a copyrighted work. How do you legally prove you’ve never been exposed to, even briefly, and thus have never been influenced by, something that is memetically and culturally everywhere, for example?

      As for AI art and music, there are open source PD/CC-only models out there, which I call “vegan models”. CommonCanvas, for instance. The problem with these models is the lack of subject material available (only 10 million images, when there are far more than 10 million things to look at in the world, before even considering ways to combine them), and the lack of interest in doing the proper legwork to make sure the AI learns properly through good image tagging, which can take upwards of years to complete. Training AI is very expensive and time consuming (especially the captioning part, since it is a human task!), and if you don’t have a literal supercomputer you can run for several months at tens of thousands of dollars per month, you aren’t going to make even a small model work in any reasonable amount of time. What makes the big art models good at what they do is both the size of the dataset and the captioning. You need a dataset in the billions.

      For example, if you have never seen any kind of cat before ever, and no one tells you what a cat looks like, and no one tells you how biology works, and you get a single image of a lion, which contains a side-on image, and you are told that is a cat, will you be able to draw it in every perspective angle? No, you won’t. You can guess and infer, but it may not be right. You have the advantage of many, many more data points to draw from in your mind, the human advantage. These AI models don’t have that. You want an AI to draw a lion from every perspective, you need to show it lion images from every perspective so it knows what it looks like.

      As for AI “tracing”, well, that’s not accurate either. AI models do not normally contain training image data in reproducible form in any way. They contain probability matrices of shapes and curves, which mathematically describe the probability of a certain shape in correlation with other concepts alongside it. Take a single one of these “neuron” matrices and graph it, and you get a mess of shapes and curves that vaguely resembles psychedelic abstract art of different parts of that concept… and sometimes other concepts too, because the model can and often does reuse the same “neuron” for logically unrelated concepts that nevertheless make sense to something only interested in defining shapes.

      Most importantly, AI models do not use the binary logic most people associate with computers. There is no definitive yes/no on anything; everything is a floating point number, a varying scale of “maybe”, which allows the model to combine and nuance concepts without being rigid. This is what makes the AI able to do more than be a tracing machine.
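      The graded “maybe” can be sketched in a few lines, if that helps. This is a toy illustration only, not how any production model is implemented (real models use millions of such units and different activation functions):

```python
import math

def sigmoid(x: float) -> float:
    """Squash any input into a graded value between 0 and 1, not a hard yes/no."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """A single artificial neuron: a weighted sum passed through a soft activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Perfectly balanced evidence gives exactly 0.5: the model's version of "maybe".
print(neuron([0.0, 0.0], [1.0, 1.0], 0.0))   # 0.5

# Mixed evidence gives an in-between answer, never a rigid 0 or 1.
print(neuron([0.2, 0.9], [1.5, -0.4], 0.1))  # some value strictly between 0 and 1
```

      Because every output is a float rather than a bit, downstream “neurons” can weigh and blend many weak signals at once, which is where the nuance comes from.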

      Where this really comes down to is the human factor, the primal fear of “the machine” or “something greater” being able to outcompete the human. Media has given us the concept of rogue AI destroying civilization since the dawn of the machine age, and it is thoroughly ingrained in our culture that smart machines = evil, even though our reality is nowhere near that point. People forget how much support is required to keep a machine going. Machines don’t heal themselves or magically keep running forever.

  • 【J】【u】【s】【t】【Z】@lemmy.world

    No, AI does not create new derivative transformative works. Copyright law is very clear that the thing that is copyrightable is that modicum of creativity, reduced to a tangible medium of expression, that society must encourage and protect.

    Derivative works need even more creativity to be protectable than original works, because they have to be so newly creative as to be different works, transformative, even though the original may still be very recognizable.

    An AI system does not have creativity. At best, it could mimic someone who is creative, but it could never have creativity on its own. It is generative, not creative.

    It’s like that monkey that took a nice picture, but the picture was not copyrightable because the person seeking to enforce the copyright didn’t create the work. It’s creativity that the Constitution seeks to encourage by the copyright clause.

    • Knock_Knock_Lemmy_In@lemmy.world

      it has to be so newly creative as to be a different work, even though the original may still be recognizable

      Your definition implies Andy Warhol wasn’t creative.

      • 【J】【u】【s】【t】【Z】@lemmy.world

        I think they are considered derivative, and are not protected. Not that he wasn’t creative, just that his work wasn’t so creative as to be independently copyrightable. I’m a little rusty on my IP law.

    • doodledup@lemmy.world

      You can make new derivative work without being creative. Just look at all the YouTubers copying each other.

      • SpaceCowboy@lemmy.ca

        Many of those reaction videos on YouTube are actually infringing copyright. It’s just that the videos they’re reacting to aren’t made by people with deep enough pockets to sue, so they get away with it.

    • ArmokGoB@lemmy.dbzer0.com

      The AI doesn’t need creativity because the “A” in “AI” stands for “artificial,” not “autonomous.” It’s a tool. Someone is controlling the output by setting the input parameters.