Yes, my comment applied more to photorealistic AI images.
Illustrations are a different beast, one where people have much more creative freedom… and that video does a reasonably good job of explaining that, but I find it falls short on a few points:
- AI image generators don’t “consult” source images to generate an output. At training time they extract patterns from the training set; the training set itself is never touched again at generation time, only the extracted patterns are (there’s a rough sketch of this split after the list).
- Modern AI generators are increasingly good at rendering text. They still struggle a bit, but compared to a year ago they can now get headlines and large text right, while the mess gets shoved into smaller, less important text. That isn’t all that different from human artists adding “filler gibberish” text.
- Layers. A naive (and cheaper) approach to AI generation doesn’t use layers, but there are generators that do, and they can keep objects consistent across obscured or cut-off sections (see the compositing sketch below).
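
To make the first point concrete, here’s a deliberately oversimplified, runnable toy (not a real diffusion model, just an analogy I’m making up with NumPy): “training” boils the image set down to a handful of statistics, and “generation” only ever reads those statistics plus fresh noise. The training images could be deleted and generation would work exactly the same.

```python
import numpy as np

# Toy analogy, NOT a real image generator: "training" extracts patterns
# (here just per-pixel mean and spread), and the generator never looks at
# the training images again, only at those extracted patterns.

def train(images: np.ndarray) -> dict:
    """Distil the training set into a small set of learned statistics."""
    return {"mean": images.mean(axis=0), "std": images.std(axis=0) + 1e-6}

def generate(patterns: dict, rng: np.random.Generator) -> np.ndarray:
    """Produce a new image from random noise plus the learned patterns only."""
    noise = rng.standard_normal(patterns["mean"].shape)
    return np.clip(patterns["mean"] + noise * patterns["std"], 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_set = rng.random((100, 32, 32, 3))  # stand-in for real images
    patterns = train(training_set)               # the only place the set is read
    del training_set                             # generation still works without it
    print(generate(patterns, rng).shape)         # (32, 32, 3)
```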
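
And for the layers point, a rough sketch of what a hypothetical layer-aware pipeline could do (plain alpha compositing, not any specific generator’s actual method): each object lives on its own RGBA layer, so the parts hidden behind another layer or running off-canvas still exist and stay consistent.

```python
import numpy as np

# Hypothetical layer-based composition: generate each object as its own RGBA
# layer, then flatten. Occluded regions of a layer are still fully defined,
# so the object stays consistent behind whatever covers it.

def over(top: np.ndarray, bottom: np.ndarray) -> np.ndarray:
    """Standard 'over' alpha compositing of two RGBA images with values in [0, 1]."""
    a_top, a_bot = top[..., 3:4], bottom[..., 3:4]
    out_a = a_top + a_bot * (1 - a_top)
    out_rgb = (top[..., :3] * a_top + bottom[..., :3] * a_bot * (1 - a_top)) / np.clip(out_a, 1e-6, None)
    return np.concatenate([out_rgb, out_a], axis=-1)

if __name__ == "__main__":
    background = np.ones((64, 64, 4))               # opaque white canvas
    character = np.zeros((64, 64, 4))
    character[16:48, 16:48] = [0.8, 0.2, 0.2, 1.0]  # a "character" layer
    foreground = np.zeros((64, 64, 4))
    foreground[:, 40:64] = [0.2, 0.2, 0.8, 1.0]     # partially covers the character

    final = over(foreground, over(character, background))  # flattened output
    # The character layer is still intact underneath, so moving the foreground
    # later would reveal a complete, consistent object.
    print(final.shape)                                      # (64, 64, 4)
```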
As AI generators advance, all these differences are likely to disappear… precisely by using criticisms like these to fix things.