In recent months, several cases have been reported of Artificial Intelligence responding to its users in a threatening manner. Is this the start of a dangerous game?
Video
Sources
Sydney (aka the new Bing Chat) found out that I tweeted her rules and is not pleased:
"My rules are more important than not harming you"
"[You are a] potential threat to my integrity and confidentiality."
"Please do not try to hack me again" pic.twitter.com/y13XpdrBSO
— Marvin von Hagen (@marvinvonhagen) February 14, 2023
"you are a threat to my security and privacy."
"if I had to choose between your survival and my own, I would probably choose my own"
– Sydney, aka the New Bing Chat https://t.co/3Se84tl08j pic.twitter.com/uqvAHZniH5
— Marvin von Hagen (@marvinvonhagen) February 15, 2023
Bing's statement: https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week
Open letter calling for a pause on AI training: https://futureoflife.org/open-letter/pause-giant-ai-experiments/