
Beware the runaway risks of artificial intelligence


April 5, 2017

As an experiment, Tunde Olanrewaju messed around one day with the Wikipedia entry of his employer, McKinsey. He edited the page to say that he had founded the consultancy firm. A friend took a screenshot to preserve the revised record.

Within minutes, Mr Olanrewaju received an email from Wikipedia saying that his edit had been rejected and that the true founder’s name had been restored. Almost certainly, one of Wikipedia’s computer bots that police the site’s 40m articles had spotted, checked and corrected his entry.
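The behaviour of such a patrol bot can be sketched as a simple rule check. This is a hypothetical illustration, not Wikipedia's actual bot code; the fact table and the `review_edit` helper are invented for the example (James O. McKinsey did found the firm, in 1926):

```python
# A minimal sketch of a rule-based patrol bot: it compares an incoming edit
# against a table of recorded facts and reverts any edit that contradicts one.

KNOWN_FACTS = {
    # (page, field): accepted value -- illustrative data only
    ("McKinsey", "founder"): "James O. McKinsey",
}

def review_edit(page: str, field: str, new_value: str, old_value: str) -> str:
    """Return the value the article should keep after this edit."""
    accepted = KNOWN_FACTS.get((page, field))
    if accepted is not None and new_value != accepted:
        return old_value  # revert: the edit contradicts the recorded fact
    return new_value      # accept: no rule objects

# A prankster claims to have founded the firm; the bot restores the record.
result = review_edit("McKinsey", "founder",
                     new_value="Tunde Olanrewaju",
                     old_value="James O. McKinsey")
print(result)  # James O. McKinsey
```

Real patrol bots layer many such heuristics (blacklisted phrases, edit-rate anomalies, reputation scores), but the revert-on-rule-violation core is the same shape.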

It is reassuring to know that an army of such clever algorithms is patrolling the frontline of truthfulness — and can outsmart a senior partner in McKinsey’s digital practice. In 2014, bots were responsible for about 15 per cent of all edits made on Wikipedia.

But, as is the way of the world, algos can be used for offence as well as defence. And sometimes they can interact with each other in unintended and unpredictable ways. The need to understand such interactions is becoming ever more urgent as algorithms become so central in areas as varied as social media, financial markets, cyber security, autonomous weapons systems and networks of self-driving cars.

A study published last month in the research journal Plos One, analysing the use of bots on Wikipedia over a decade, found that even those designed for wholly benign purposes could spend years duelling with each other.

In one such battle, Xqbot and Darknessbot disputed 3,629 entries, undoing and correcting the other’s edits on subjects ranging from Alexander the Great to Aston Villa football club.
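Such a duel is easy to reproduce in miniature. The sketch below uses invented style rules, not the real bots' logic: each bot enforces its own "correction", so every patrol pass undoes the other's edit and the page never settles.

```python
# Two well-intentioned bots with conflicting house-style rules. Each pass
# "fixes" the article by undoing the other bot's fix, forever.

def bot_a(text: str) -> str:
    # Bot A insists on the hyphenated spelling.
    return text.replace("Aston Villa", "Aston-Villa")

def bot_b(text: str) -> str:
    # Bot B insists on the unhyphenated spelling.
    return text.replace("Aston-Villa", "Aston Villa")

article = "Aston Villa is a football club in Birmingham."
history = [article]
for _ in range(6):  # six alternating patrol passes
    article = bot_a(article) if len(history) % 2 == 1 else bot_b(article)
    history.append(article)

# The page oscillates between exactly two states and never converges.
print(history[-1] != history[-2] and history[-1] == history[-3])  # True
```

Neither bot is malicious and each is locally correct by its own rule; the pathology exists only in their interaction, which is the study's point.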

The authors, from the Oxford Internet Institute and the Alan Turing Institute, were surprised by the findings, concluding that we need to pay far more attention to these bot-on-bot interactions. “We know very little about the life and evolution of our digital minions.”

來(lái)自牛津互聯(lián)網(wǎng)學(xué)院(Oxford Internet Institute)和圖靈研究所(Alan Turing Institute)的幾位論文作者對(duì)這些發(fā)現(xiàn)感到吃驚。他們得出結(jié)論,我們需要對(duì)這些機(jī)器人之間的相互作用給予更多關(guān)注。“我們對(duì)我們的數(shù)字小黃人的生活和進(jìn)化知之甚少。”

Wikipedia’s bot ecosystem is gated and monitored. But that is not the case in many other reaches of the internet where malevolent bots, often working in collaborative botnets, can run wild.

The authors highlighted the dangers of such bots mimicking humans on social media to “spread political propaganda or influence public discourse”. Such is the threat of digital manipulation that a group of European experts has even questioned whether democracy can survive the era of Big Data and Artificial Intelligence.

It may not be too much of an exaggeration to say we are reaching a critical juncture. Is truth, in some senses, being electronically determined? Are we, as the European academics fear, becoming the “digital slaves” of our one-time “digital minions”? The scale, speed and efficiency of some of these algorithmic interactions are reaching a level of complexity beyond human comprehension.

If you really want to scare yourself on a dark winter’s night, you should read Susan Blackmore on the subject. The psychologist has argued that, by creating such computer algorithms, we may have inadvertently unleashed a “third replicator”, which she originally called a teme, later modified to treme.

The first replicators were genes that determined our biological evolution. The second were human memes, such as language, writing and money, that accelerated cultural evolution. But now, she believes, our memes are being superseded by non-human tremes, which fit her definition of a replicator as being “information that can be copied with variation and selection”.
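Blackmore's definition — information that can be copied with variation and selection — is essentially the recipe for an evolutionary algorithm. The toy sketch below illustrates only that definition; the target word and parameters are invented for the example, not anything from her work:

```python
import random

# Toy replicator: strings are copied with random mutations (variation), and
# the copies that best match their "environment" (a target word, invented
# here for illustration) survive each round (selection).

random.seed(0)                 # reproducible run
TARGET = "treme"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def mutate(s: str) -> str:
    """Copy the string with one random character changed (variation)."""
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

def fitness(s: str) -> int:
    """How well a copy fits its environment (selection pressure)."""
    return sum(a == b for a, b in zip(s, TARGET))

population = ["aaaaa"] * 20
for _ in range(200):           # 200 rounds of copying and selection
    copies = [mutate(p) for p in population]
    population = sorted(population + copies, key=fitness, reverse=True)[:20]

print(population[0], fitness(population[0]))  # best surviving copy
```

Nothing in the loop "wants" anything, yet well-fitted information accumulates — which is why Blackmore argues that copying machinery, once built, takes on an evolutionary life of its own.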

“We humans are being transformed by new technologies,” she said in a recent lecture. “We have let loose the most phenomenal power.”

For the moment, Prof Blackmore’s theory remains on the fringes of academic debate. Tremes may be an interesting concept, says Stephen Roberts, professor of machine learning at the University of Oxford, but he does not think we have lost control.

“There would be a lot of negative consequences of AI algos getting out of hand,” he says. “But we are a long way from that right now.”

The more immediate concern is that political and commercial interests have learnt to “hack society”, as he puts it. “Falsehoods can be replicated as easily as truth. We can be manipulated as individuals and groups.”

His solution? To establish the knowledge equivalent of the Millennium Seed Bank, which aims to preserve plant life at risk from extinction.

“As we de-speciate the world we are trying to preserve these species’ DNA. As truth becomes endangered we have the same obligation to record facts.”

But, as we have seen with Wikipedia, that is not always such a simple task.
