The imagery generated by artificial intelligence in response to text prompts is often correctly characterized as racist and misogynistic, reflecting the biases of many of the online images upon which the systems trained.
Love it. I get the double-person "mistake" often as well (more so with Stable Diffusion than with Flux). It happens most when the requested image size is larger than the resolution the model was trained on (512×512 for Stable Diffusion 1.x; 1024×1024 for SDXL). Embrace the flaws of AI!
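For readers curious about the mechanics, here is a rough sketch, assuming Stable Diffusion's standard VAE with its 8× spatial downsampling, of how an oversized canvas translates into a latent grid larger than the one the model saw during training. The `latent_dims` and `oversize_ratio` helpers are illustrative names, not part of any library.

```python
# Sketch: why oversized canvases push a diffusion model outside its
# training distribution. Assumes the standard Stable Diffusion VAE,
# which downsamples images by a factor of 8 in each dimension.

VAE_FACTOR = 8      # pixels per latent cell (SD's autoencoder)
TRAIN_SIZE = 512    # SD 1.x training resolution (1024 for SDXL)

def latent_dims(width_px: int, height_px: int) -> tuple[int, int]:
    """Pixel dimensions -> latent grid dimensions."""
    return width_px // VAE_FACTOR, height_px // VAE_FACTOR

def oversize_ratio(width_px: int, height_px: int,
                   train_size: int = TRAIN_SIZE) -> float:
    """How many training-sized latent areas fit in the requested canvas."""
    w, h = latent_dims(width_px, height_px)
    tw, th = latent_dims(train_size, train_size)
    return (w * h) / (tw * th)

# A 512x512 request matches the training window exactly:
print(latent_dims(512, 512), oversize_ratio(512, 512))    # (64, 64) 1.0

# A 512x1024 portrait doubles the latent area -- roughly two
# training-sized windows stacked vertically, which is where the
# "double person" duplication tends to appear.
print(latent_dims(512, 1024), oversize_ratio(512, 1024))  # (64, 128) 2.0
```

When the latent grid is much bigger than the training window, the denoiser effectively composes several familiar-sized scenes into one frame, and a second figure is a common result.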
It’s a mistake and, perhaps, a commentary. The flaws, we agree, can be quite revealing!