April 16, 2024

AndronETalksNews

AndronETalksNews

College Student Cracks Microsoft’s Bing Chatbot Revealing Secret Instructions

Breitbart

By LUCAS NOLAN

A student at Stanford University has already figured out a way to bypass the safeguards in Microsoft’s recently launched AI-powered Bing search engine and conversational bot. The chatbot revealed its internal codename is “Sydney” and it has been programmed not to generate jokes that are “hurtful” to groups of people or provide answers that violate copyright laws.

Ars Technica reports that a Stanford University student has successfully bypassed the safeguards installed in Microsoft’s “New Bing” AI-powered search engine. The OpenAI-powered chatbot, like the leftist-biased ChatGPT, has an initial prompt that controls its behavior when receiving user input. This initial prompt was found using a “prompt injection attack technique,” which bypasses earlier instructions in a language model prompt and substitutes new ones.

Read more…