AI being trained on images of child sexual abuse
A study by the Stanford Internet Observatory found 3,226 suspected images of child sexual abuse in LAION-5B, an open dataset used to train text-to-image generators such as Stable Diffusion. The finding suggests that models trained on such data may learn to generate explicit imagery involving children. The researchers recommend that anyone holding LAION-derived datasets delete or clean them, and that models trained on this data without adequate safety measures be deprecated. They also caution against training generative models on datasets that mix adult sexual content with images of children.