Microsoft's caution was informed by the company's experience nearly seven years ago, when it introduced a chatbot named Tay.
Users almost immediately found ways to make it spew racist, sexist and other offensive language. The company took Tay down within a day, never to release it again.
Much of the training on the new chatbot was focused on protecting against that kind of harmful response, or scenarios that invoked violence, such as planning an attack on a school.
At the Bing launch recently, Sarah Bird, a leader in Microsoft's responsible A.I. efforts, said the company had developed a new way to use generative tools to identify risks and train how the chatbot responded.
"The model pretends to be an adversarial user to conduct thousands of different potentially harmful conversations with Bing to see how it reacts," Ms. Bird said. She said Microsoft's tools classified those conversations to "understand gaps in the system."
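The article does not detail how Microsoft's internal tooling works, but the general pattern Ms. Bird describes, one model role-playing a hostile user while a classifier labels the target chatbot's replies, can be sketched roughly as follows. Everything below, including the function names adversarial_turn, target_reply and classify_harm and the harm categories, is a hypothetical illustration, not Microsoft's actual system.

from collections import Counter

def adversarial_turn(history):
    # Stand-in for a generative model role-playing a hostile user.
    return "hypothetical adversarial prompt"

def target_reply(history, prompt):
    # Stand-in for the chatbot being tested.
    return "hypothetical chatbot response"

def classify_harm(response):
    # Stand-in for a classifier that assigns a harm category to a reply,
    # e.g. "violence", "hate" or "none".
    return "none"

def run_red_team(num_conversations=1000, turns_per_conversation=10):
    # Hold many simulated adversarial conversations and tally how often
    # the chatbot's replies fall into each harm category.
    tally = Counter()
    for _ in range(num_conversations):
        history = []
        for _ in range(turns_per_conversation):
            prompt = adversarial_turn(history)
            reply = target_reply(history, prompt)
            history.append((prompt, reply))
            tally[classify_harm(reply)] += 1
    return tally

if __name__ == "__main__":
    # The resulting counts point to "gaps in the system": categories
    # where harmful replies slip through most often.
    print(run_red_team(num_conversations=10, turns_per_conversation=3))

In a real pipeline, each stand-in function would wrap a call to a large language model or a trained safety classifier; here they only return placeholders so the control flow is clear.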
Some conversations shared online have shown how the chatbot has a sizable capacity for producing bizarre responses. It has aggressively confessed its love, scolded users for being "disrespectful and annoying" and declared that it may be sentient.
In the first week of public use, Microsoft said, it found that in "long extended chat sessions of 15 or more questions, Bing can become repetitive or be prompted to give responses that are not necessarily helpful or in line with our designed tone."
The issue of chatbot responses that veer into strange territory is widely known among researchers.
In an interview recently, Sam Altman, the chief executive of OpenAI, said improving what's known as "alignment," or how the responses safely reflect a user's will, was "one of these must-solve problems."
"We really need these tools to act in accordance with their users' will and preferences and not go do other things," Mr. Altman said.
He said that the problem was "really hard" and that while they had made great progress, "we'll need to find much more powerful techniques in the future."
In November, Meta, the owner of Facebook, unveiled its own chatbot, Galactica. Designed for scientific research, it could instantly write its own articles, solve math problems and generate computer code.
Like the Bing chatbot, it also made things up and spun tall tales. Three days later, after being inundated with complaints, Meta removed Galactica from the Internet.
Earlier last year, Meta released another chatbot, BlenderBot. Meta's chief scientist, Yann LeCun, said the bot had never caught on because the company worked so hard to make sure that it would not produce offensive material.
"It was panned by people who tried it," he said. "They said it was stupid and kind of boring. It was boring because it was made safe."
The World Students Society thanks Karen Weise and Cade Metz. Karen Weise reported from Seattle, and Cade Metz and Kevin Roose reported from San Francisco.