We Asked A.I. to Create the Joker. It Generated a Copyrighted Image.::Artists and researchers are exposing copyrighted material hidden within A.I. tools, raising fresh legal questions.
I have a question for the author of this stupid fucking article. What the fuck do you think half of the artists on the planet do? They use copyrighted images as reference when drawing fictional characters and they often end up looking very similar to the original. There are thousands of people on social media that sell these drawings on a regular basis.
That’s not the point. If Joe the artist makes $25,000 a year breaking copyright, that doesn’t mean copyright is now meaningless.
Yes, but image copyright is a fickle thing, because at what point does it stop being a copyrighted image? I have to reference the “Ship of Theseus” thought experiment, because it does sort of apply here. A fictional character cannot be drawn from a firsthand perspective, so some sort of copyrighted image HAS to be used as a reference. So where does one draw the line?
Actually humans are quite capable of creating art while having never seen art in their whole life.
Utter nonsense. Have you ever looked at the history of art? It’s all a slow incremental crawl based on previous efforts. Nothing comes from nothing.
So you’re saying it’s impossible for humans to create art without first seeing art? That creates the problem of where the first piece of art came from.
Also, have you never drawn anything during class as a kid? Because I definitely did; my old notebooks are full of various drawings, and I had no interest in art outside of boredom in class. Art comes from imagination, not nothing.
They looked at nature and crudely copied that. They didn’t start drawing Mickey Mouse on day one.
Humans can look at a landscape, people, animals, or even some random bullshit, add a dash of creativity and imagination, and transform that into something beautiful. Or they can skip the first part and just draw something from their imagination, like I can draw a crazy foam monster while having never seen one.
On the other hand, you can feed an AI millions of hours of public CCTV footage and you will never get anything other than variations of CCTV footage. AI doesn’t have creativity and can’t create art out of landscapes, animals, people, etc.
Which is why blind people are so amazing at drawing…
You are recombining patterns you have seen before: “crazy”, “foam”, “monster” all have a certain look that your brain got trained on; you are simply remixing them. An AI can do exactly the same. The fact that there are words for those concepts should be enough to tell you that those ideas are not original.
Actually, monsters don’t exist in real life, but OK, here’s a challenge for you: train an AI on images of foam and see if it can come up with a drawing of a foam monster.
I can tell you what you are going to get, though: pictures of foam. Not drawings, not art. The human brain doesn’t just remix existing input; creativity is a thing.
Also blind painters are absolutely a thing that exists.
It’s a bit different for Midjourney V6: previous AI models would create their own original images based on patterns learned from the data. Midjourney V6, on the other hand, reproduces the original images to such a degree that they look identical to the originals to the average observer; you have to see them side by side to even spot the differences at all. DALL-E 3 has that problem as well, but to a much lesser degree.
That means something is going wrong in the training, e.g. some images end up duplicated so often in the training data that the AI memorizes them completely. Normally that should be reduced or avoided by filtering out duplicate images, but that doesn’t seem to be happening, or the images slip through due to small changes (e.g. the size or crop differs between websites).
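The dedup filtering described here is usually done with perceptual hashes rather than exact byte comparison, precisely so that resized or re-cropped copies of the same image still collide. A minimal sketch of the idea (a simple difference hash; all names and the threshold value are illustrative, not any lab’s actual pipeline):

```python
# Sketch of perceptual-hash deduplication for training data.
# dhash() downscales a grayscale image and encodes whether each pixel is
# brighter than its left neighbour; near-duplicates (e.g. resized copies)
# tend to land within a small Hamming distance of each other.

def dhash(gray, size=8):
    """Difference hash of a 2D grayscale image (list of rows of ints),
    using naive nearest-neighbour downscaling to size x (size+1)."""
    h, w = len(gray), len(gray[0])
    small = [
        [gray[r * h // size][c * w // (size + 1)] for c in range(size + 1)]
        for r in range(size)
    ]
    bits = 0
    for row in small:
        for a, b in zip(row, row[1:]):
            bits = (bits << 1) | (1 if a < b else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def dedupe(images, threshold=4):
    """Keep only images whose hash is more than `threshold` bits away
    from every image already kept."""
    kept, hashes = [], []
    for img in images:
        hsh = dhash(img)
        if all(hamming(hsh, h) > threshold for h in hashes):
            kept.append(img)
            hashes.append(hsh)
    return kept
```

If such a filter is skipped, or the threshold is too strict to catch recompressed press images, the same movie still can appear thousands of times in the training set, which is exactly the memorization failure mode described above.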
Note this doesn’t just affect exact duplication; it also affects remixing: e.g. when you tell it to draw the Joker doing some task, you’ll get Joaquin Phoenix’s Arthur Fleck, not some random guy with clown features.
All of this happens with very simple prompts that do not contain all those very specific details.
In the AI’s defense: all the examples I have seen so far are press-release movie stills. So they naturally end up copied all over the place, and claiming copyright violation over material you released for the press to reuse wouldn’t fly either. But either way, Midjourney is still misbehaving here and needs to be fixed.
More broadly speaking, I think it would be a good time to move away from training these AIs almost exclusively on images and start training them on video. Not just to be able to reproduce video, but so that the AI gets a more holistic understanding of how the world works. At the moment all its knowledge is based on deliberate photo moments, and there are very large gaps in its understanding.