
AI Chatbot: "Corrupted" Too Quickly, "Laid Off"

Course: Financial Times Original Reading


July 25, 2020


Tay, an artificial-intelligence chatbot developed by Microsoft, was designed as a teenage girl who chats with people and tells stories on Twitter. Within less than 24 hours, however, she had been "taught bad things", turning into a "delinquent girl" who was anti-Semitic, sexist and racist all at once.

Vocabulary and background knowledge you may encounter in the test:

chatbot — a program that chats with people

racist — racially prejudiced /ˈreɪsɪst/

xenophobic — fearful or hateful of foreigners /ˌziːnəˈfoʊbɪk/

backfired — produced the opposite of the intended effect /ˈbækˌfaɪərd/

conspiracy — a secret plot /kənˈspɪrəsi/

rogue — a scoundrel; (of a machine) behaving erratically /rəʊɡ/

totalitarianism — a system of centralised, dictatorial state control /təʊˌtælɪˈteərɪənɪzəm/

atheism — disbelief in the existence of God /ˈeɪθiɪz(ə)m/

auto-generate — to generate automatically

The reading is about to begin. We suggest that you time how long it takes you to read the whole article, then compare your time against the reference value given at the end to estimate your reading speed.

Microsoft pulls Twitter bot Tay after racist tweets (547 words)

By Daniel Thomas in London

* * *

Microsoft has been forced to take down an artificially intelligent “chatbot” it had set loose on Twitter after its interactions with humans led it to start tweeting racist, sexist and xenophobic commentary.

The chatbot, named Tay, is a computer program designed by Microsoft to respond to questions and conversations on Twitter in an attempt to engage the millennial market in the US.

However, the tech group’s attempts spectacularly backfired after the chatbot was encouraged to use racist slurs, troll a female games developer and endorse Hitler and conspiracy theories about the 9/11 terrorist attacks. A combination of Twitter users, online pranksters and insufficiently sensitive filters led it to go rogue, forcing Microsoft to shut it down within hours of setting it live.

Tweets reported to be from Tay, which have since been deleted, included: “bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we’ve got”, and “Ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism”. It appeared to endorse genocide, deny the Holocaust and refer to one woman as a “stupid whore”.

Given that it was designed to learn from the humans it encountered, Tay’s conversion to extreme racism and genocide may not be the best advertisement for the Twitter community in the week the site celebrated its 10th anniversary.

Tay was developed by Microsoft to experiment with conversational understanding using its artificial intelligence technology. According to Microsoft’s online introduction, it is aimed at 18- to 24-year-olds and engages them “through casual and playful conversation”.

Tay is described as a “fam from the internet that’s got zero chill! The more you talk the smarter Tay gets”, with people encouraged to ask it to play games and tell stories and jokes. Instead, many people took to asking controversial questions that were repeated by Tay.
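Microsoft has not published Tay’s internals, but the parroting behaviour described above can be illustrated with a deliberately naive sketch in Python. Everything here (the class, its method, the seed reply) is hypothetical, a toy model of a bot that stores user messages verbatim and replays them, not Microsoft’s actual design:

```python
import random

class NaiveChatbot:
    """Toy bot that 'learns' by storing user messages verbatim.

    A hypothetical illustration of the poisoning risk: anything users
    say becomes a candidate reply, so coordinated trolls can steer the
    bot's future output.
    """

    def __init__(self) -> None:
        self.learned_replies = ["hellooo! ask me to tell a story!"]

    def chat(self, user_message: str) -> str:
        # Store the raw message for later reuse -- the core mistake:
        # no moderation between 'hearing' a phrase and repeating it.
        self.learned_replies.append(user_message)
        return random.choice(self.learned_replies)

bot = NaiveChatbot()
bot.chat("repeat after me: <planted offensive text>")
print(bot.chat("tell me a joke"))  # may now parrot the planted text
```

With no filter between input and learned output, a handful of coordinated users can dominate the reply pool within hours, which is essentially the dynamic the article describes.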

The chatbot has since been stood down, signing off with a jaunty: “Phew. Busy day. Going offline for a while to absorb it all. Chat soon.”

The controversial tweets have been removed from Tay’s timeline.

Microsoft said it would make “some adjustments to Tay”.

“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it,” Microsoft said.

Tay uses data provided in conversations to search for responses and create simple personalised profiles. Microsoft said responses were generated from relevant public data and by using AI and editorial developed by a staff including improvisational comedians. “That data has been modelled, cleaned and filtered by the team developing Tay,” it said.
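The article partly blames “insufficiently sensitive filters”. The sketch below, with an invented blocklist and function name, shows why a naive word-blocklist filter is easy to defeat, under the assumption (not confirmed by the article) that some such keyword check was in the pipeline:

```python
import re

# Hypothetical blocklist -- real moderation needs far more coverage.
BLOCKLIST = {"offensiveword", "anotherslur"}

def naive_filter(text: str) -> bool:
    """Return True if the text passes the filter (i.e. looks clean)."""
    # Split into lowercase alphabetic runs, then check each word.
    words = re.findall(r"[a-z]+", text.lower())
    return not any(word in BLOCKLIST for word in words)

print(naive_filter("you are an offensiveword"))    # False: blocked
print(naive_filter("you are an OFFENSIVEWORD!!"))  # False: case/punctuation handled
print(naive_filter("you are an offensive word"))   # True: one space defeats it
print(naive_filter("you are an 0ffensiveword"))    # True: leetspeak defeats it
```

Trivial obfuscation slips through, and in a system that learns from conversation, every phrase that slips through becomes material the bot can later repeat.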

Interactions between companies and the public on Twitter have a habit of spinning out of control, such as when corporate hashtags are misused to highlight bad practices by the company.

Automated feeds have also become a problem in the past. Habitat, the furniture retailer, attempted to use trending topics to boost traffic to its website but inadvertently tweeted about Iranian politics.

Similarly, the New England Patriots celebrated reaching 1m followers by allowing people to auto-generate images of jerseys featuring their Twitter handles, including very offensive ones.
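The jersey generator is a classic case of user input flowing unsanitized into auto-generated content. Below is a hedged sketch of the usual defence, an allowlist pattern check plus a moderator-maintained blocklist; the term list and function name are illustrative, not the Patriots’ actual fix:

```python
import re

# Hypothetical moderator-maintained list of banned substrings.
OFFENSIVE_TERMS = {"offensiveterm"}

def is_safe_handle(handle: str) -> bool:
    """Admit a handle into auto-generated imagery only if it looks clean."""
    name = handle.lstrip("@").lower()
    # Twitter handles: up to 15 letters, digits or underscores.
    if not re.fullmatch(r"\w{1,15}", name, flags=re.ASCII):
        return False
    # Reject handles containing any banned substring.
    return not any(term in name for term in OFFENSIVE_TERMS)

print(is_safe_handle("@PatriotsFan99"))   # True: rendered
print(is_safe_handle("@offensivetermx"))  # False: contains a banned term
```

Even this check inherits the blocklist weakness shown in the previous sketch, which is why high-volume auto-generation usually adds human review or rate limits on top.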

Google has had to tweak its search engine after its autocomplete feature generated racist suggestions.

Based on the article you have just read, complete the following self-test questions:

1. Where is the chatbot Tay’s test site?

a. Britain

b. Canada

c. America

d. China

2. What did Microsoft do to Tay after its xenophobic commentary?

a. updated the system

b. apologized publicly

c. set another one live

d. shut it down

3. How old is Twitter now?

a. 15 years old

b. 10 years old

c. 7 years old

d. not mentioned

4. Who has had to tweak its search engine about racist suggestions?

a. Google

b. Similarly

c. Habitat

d. Microsoft

[1] Answer: c. America

Explanation: Microsoft said the chatbot Tay was still being tested and was aimed mainly at users aged 18 to 24 in the US.

[2] Answer: d. shut it down

Explanation: Microsoft had to take it offline for a while and said it was making “some adjustments to Tay”.

[3] Answer: b. 10 years old

Explanation: From the article’s mention of Twitter’s “10th anniversary” we can infer that it is now 10 years old.

[4] Answer: a. Google

Explanation: Auto-generated content has caused trouble in the past as well; in the examples at the end of the article, it is Google that had to tweak its search engine.

