• pavnilschanda@lemmy.world
      1 month ago

      True, but by their very nature, their generations tend to depict anonymous, non-existent identities, and the sheer volume of them would make it harder for investigators to detect pictures of real, human victims (which can also include indicators of the crime's location).

    • ricecake@sh.itjust.works
      1 month ago

      It does learn from real images, but it doesn’t need real images of what it’s generating to produce related content.
      As in, a network trained with no exposure to children is unlikely to be able to easily produce quality depictions of children, and without training on nudity it's unlikely to produce good results there either.
      However, if it knows both concepts it can combine them readily enough, similar to how you know the concept of “bicycle” and that of “Neptune” and can readily imagine “Neptune riding an old-fashioned bicycle around the sun while flaunting its top hat”.

      Under the hood, this type of AI is effectively a very sophisticated “error correction” system. It repeatedly changes pixels in the image to try to “fix” it so that it matches the prompt, usually starting from a smear of random colors (static noise).
      That’s how it’s able to combine different concepts from a wide range of images to create things it’s never seen.
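      The correction loop described above can be sketched in a few lines of Python. This is a toy illustration only, not a real diffusion model: the `target` array and the step size are made-up stand-ins, and where a real model uses a trained neural network to predict the error from the prompt, this sketch cheats and computes the error directly, just to show the shape of the iterative fix-up process.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical "clean image" the prompt describes (3 pixels for brevity).
      target = np.array([0.2, 0.8, 0.5])

      # Start from a smear of random static noise, as the comment describes.
      image = rng.normal(size=3)

      for step in range(100):
          # A real model would use a trained network, conditioned on the
          # prompt, to *predict* this error; here we compute it directly.
          predicted_error = image - target
          # Apply a small correction each step, gradually "fixing" the noise.
          image = image - 0.1 * predicted_error

      # After many small corrections the noise has converged to the target.
      print(np.allclose(image, target, atol=1e-3))
      ```

      Each pass only nudges the pixels slightly, which is why real systems run the network many times per image and why starting points (different noise) yield different final images from the same prompt.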