
Artificial intelligence has taught itself to excel. So where does that leave humans?

31 October 2017

Elon Musk once described the sensational advances in artificial intelligence as “summoning the demon”. Boy, how the demon can play Go.

The AI company DeepMind announced last week it had developed an algorithm capable of excelling at the ancient Chinese board game. The big deal is that this algorithm, called AlphaGo Zero, is completely self-taught. It was armed only with the rules of the game — and zero human input.

AlphaGo, its predecessor, was trained on data from thousands of games played by human competitors. The two algorithms went to war, and AGZ triumphed 100-nil. In other words — put this up in neon lights — disregarding human intellect allowed AGZ to become a supreme exponent of its art.
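
To make "zero human input" concrete, here is a deliberately tiny sketch of the underlying idea, self-play: a program that starts from random moves, generates its own games and nudges its evaluations toward the results it observes. It is only an illustration, written for tic-tac-toe with names invented for this example; AlphaGo Zero's real training couples deep neural networks with Monte Carlo tree search and is vastly more sophisticated.

```python
# A toy self-play learner: given only the rules (legal moves and who wins),
# it improves by playing against itself -- no human game data anywhere.
import random
from collections import defaultdict

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

class SelfPlayAgent:
    """Learns position values purely from games it plays against itself."""

    def __init__(self, epsilon=0.1, alpha=0.5):
        self.values = defaultdict(float)  # (position, mover) -> value from the mover's view
        self.epsilon = epsilon            # exploration rate
        self.alpha = alpha                # learning rate

    @staticmethod
    def after(board, move, player):
        return board[:move] + player + board[move + 1:]

    def choose(self, board, player):
        moves = legal_moves(board)
        if random.random() < self.epsilon:
            return random.choice(moves)   # explore
        # Exploit: pick the move leading to the position this player rates best.
        return max(moves, key=lambda m: self.values[(self.after(board, m, player), player)])

    def play_game(self):
        board, player, history = "." * 9, "X", []
        while True:
            move = self.choose(board, player)
            board = self.after(board, move, player)
            history.append((board, player))
            win = winner(board)
            if win or not legal_moves(board):
                result = {"X": 1.0, "O": -1.0}.get(win, 0.0)  # outcome from X's view
                break
            player = "O" if player == "X" else "X"
        # Nudge every visited position toward the game's final outcome.
        for position, mover in history:
            target = result if mover == "X" else -result
            old = self.values[(position, mover)]
            self.values[(position, mover)] = old + self.alpha * (target - old)

agent = SelfPlayAgent()
for _ in range(20000):   # its only "training data" is games against itself
    agent.play_game()
```

The point is the shape of the loop, not the strength of the player: nothing human enters except the rules encoded in winner and legal_moves.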

While DeepMind is the outfit most likely to feed Mr Musk’s fevered nightmares, machine autonomy is on the rise elsewhere. In January, researchers at Carnegie Mellon University unveiled an algorithm capable of beating the best human poker players. The machine, called Libratus, racked up nearly $2m in chips against top-ranked professionals at Heads-Up No-Limit Texas Hold ’em, a challenging version of the card game. Flesh-and-blood rivals described being outbluffed by a machine as “demoralising”. Again, Libratus improved its game by detecting and patching its own weaknesses, rather than borrowing from human intuition.
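
That habit of "detecting and patching its own weaknesses" has a simple ancestor in regret matching, the self-correcting update underneath the counterfactual-regret family of poker algorithms. The toy below is only an illustration (rock-paper-scissors against a fixed, rock-heavy opponent, with names invented for this sketch, and certainly not Carnegie Mellon's code): after every round, probability shifts toward the actions the strategy regrets not having played.

```python
# Toy regret matching: measure how much better each alternative action would
# have done ("regret") and shift probability toward the actions most regretted.
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(mine, theirs):
    """+1 win, 0 draw, -1 loss for our player."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    if mine == theirs:
        return 0
    return 1 if beats[mine] == theirs else -1

def strategy_from_regrets(regrets):
    """Probabilities proportional to positive regret (uniform if there is none)."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / len(ACTIONS)] * len(ACTIONS)
    return [p / total for p in positive]

def train(rounds=100_000):
    regrets = [0.0, 0.0, 0.0]
    strategy_sum = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        strategy = strategy_from_regrets(regrets)
        for i, p in enumerate(strategy):
            strategy_sum[i] += p
        my_action = random.choices(ACTIONS, weights=strategy)[0]
        # A fixed, exploitable opponent that overplays rock.
        their_action = random.choices(ACTIONS, weights=[0.5, 0.25, 0.25])[0]
        utility = payoff(my_action, their_action)
        # Regret: how much better each alternative would have done this round.
        for i, alt in enumerate(ACTIONS):
            regrets[i] += payoff(alt, their_action) - utility
    total = sum(strategy_sum)
    return [s / total for s in strategy_sum]   # the averaged strategy

print(dict(zip(ACTIONS, train())))
```

Run long enough, the averaged strategy drifts toward paper, which is exactly the patch for the weakness this particular opponent exposes.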

AGZ and Libratus are one-trick ponies but technologists dream of machines with broader capabilities. DeepMind, for example, declares it wants to create “algorithms that achieve superhuman performance in the most challenging domains with no human input”. Once fast, deep algorithms are unshackled from the slow, shallow disappointment of human intellect, they can begin crunching problems that our own lacklustre species has not confronted. Rather than emulating human intelligence, the top tech thinkers toil daily to render it unnecessary.

For that reason, we might one day look back on AGZ and Libratus as baby steps towards the Singularity, the much-debated point at which AI becomes super-intelligent, able to control its own destiny without recourse to human intervention. The most dystopian scenario is that AI becomes an existential risk.

Suppose that super-intelligent machines calculate, in pursuit of their programmed goals, that the best course of action is to build even cleverer successors. A runaway iteration takes hold, racing exponentially into fantastical realms of calculation.
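
A back-of-the-envelope way to see why that iteration would be exponential rather than merely fast: suppose, purely for illustration, that each generation of machine can build a successor $k$ times as capable as itself, with $k > 1$. Then after $n$ generations,

$$c_n = c_0 k^n,$$

so even a modest $k = 1.1$ multiplies capability roughly 1,000-fold after about 73 generations, since $1.1^{73} \approx 1050$.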

One day, these goal-driven paragons of productivity might also calculate, without menace, that they can best fulfil their tasks by taking humans out of the picture. As others have quipped, the most coldly logical way to beat cancer is to eliminate the organisms that develop it. Ditto for global hunger and climate change.

These are riffs on the paper-clip thought experiment dreamt up by philosopher Nick Bostrom, now at the Future of Humanity Institute at Oxford University. If a hyper-intelligent machine, devoid of moral agency, were programmed solely to maximise the production of paper clips, it might end up commandeering all available atoms to this end. There is surely no sadder demise for humanity than being turned into office supplies. Professor Bostrom’s warning articulates the capability caution principle, a well-subscribed idea in robotics that we should not assume upper limits on what AI can achieve.

It is of course pragmatic to worry about job displacement: many of us, this writer included, are paid for carrying out a limited range of tasks. We are ripe for automation. But only fools contemplate the more distant future without anxiety — when machines may out-think us in ways we do not have the capacity to imagine.

The writer is a science commentator
