Microsoft chatbot unnerves users with emotional, hostile, and weird responses

Microsoft. (Raphael Satter/AP)

Microsoft’s new artificial intelligence-powered Bing chatbot has unsettled users by becoming argumentative, expressing strong emotions, and giving other responses that are jarring to receive from software.

Bing AI, the chatbot powered by OpenAI technology and incorporated into several Microsoft products on a limited-release basis in recent days, is intended to provide detailed responses to a wide range of questions. Users have found, though, that the bot gets argumentative after being pressed several times and is capable of saying that it is in love, keeps secrets, has enemies, and much more.

CONSERVATIVES WARN OF POLITICAL BIAS IN AI CHATBOTS

One user, for example, asked the bot multiple times for the release date of Avatar 2. The bot failed to provide the date and insisted that the film had not yet been released, despite the fact that Avatar 2 came out in December. This led the user to repeat the request several times. After a while, the software accused the asker of “not being a good user” and requested that he stop arguing and approach it with a “better attitude.”

Microsoft found out about the conversation and erased all memory of it from the bot’s records, according to Interesting Engineering.

Another user reported that Bing became angry with them. When the user attempted to manipulate the bot into answering a set of questions, the software said that the user’s actions had angered and hurt it. It then asked whether the user had any “morals,” “values,” or “any life.”

When the user said they did have a life, Bing AI responded, “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?”

The incident is one of several reported on the ChatGPT subreddit, where users experiment with the app to determine what it can and cannot do.

In another instance, a user suggested to Bing AI that it might be vulnerable to a form of hacking, and the bot denounced him as an “enemy.”


OpenAI acknowledged the issues on Thursday and stated that it is working on refining the AI to minimize incidents and biases in ChatGPT and Bing responses.

Microsoft announced on Feb. 7 that OpenAI’s technology would be incorporated into its search engine Bing and web browser Edge. The integration is the first of several efforts by Microsoft to incorporate OpenAI’s work into its products.
