FT editorial: Guarding against the downsides of artificial intelligence

March 18, 2018

The latest report on the potentially malicious uses of artificial intelligence reads like a pitch for the next series of the dystopian TV show Black Mirror.

Drones using facial recognition technology to hunt down and kill victims. Information being manipulated to distort the social media feeds of targeted individuals. Cleaning robots being hacked to bomb VIPs. The potentially harmful uses of AI are as vast as the human imagination.

One of the big questions of our age is: how can we maximise the undoubted benefits of AI while limiting its downsides? It is a tough challenge. All technologies are dual-use, and AI particularly so, given that it can significantly increase the scale and potency of malicious acts while lowering their costs.

The report, written by 26 researchers from several organisations including OpenAI, Oxford and Cambridge universities, and the Electronic Frontier Foundation, performs a valuable, if scary, service in flagging the threats posed by the abuse of powerful technology by rogue states, criminals and terrorists. Where it is less compelling is in coming up with possible solutions.

Much of the public concern about AI focuses on the threat of an emergent superintelligence and the mass extinction of our species. There is no doubt that the issue of how to “control” artificial general intelligence, as it is known, is a fascinating and worthwhile debate. But in the words of one AI expert, it is probably “a second half of the 21st century problem”.

The latest report highlights how we should already be worrying today about the abuse of relatively narrow AI. Human evil, incompetence and poor design will remain a bigger threat for the foreseeable future than some omnipotent and omniscient Terminator-style Skynet.

AI academics have led a commendable campaign to highlight the dangers of so-called lethal autonomous weapons systems. The United Nations is now trying to turn that initiative into workable international protocols.

Some interested philanthropists, including Elon Musk and Sam Altman, have also sunk money into research institutes focusing on AI safety, including one that co-wrote the report. Normally, researchers who call for more money to be spent on research should be treated with some scepticism. But there are estimated to be just 100 researchers in the western world grappling with the issue. That seems far too few, given the scale of the challenge.

Governments need to deepen their understanding of this area. In the US, the creation of a federal robotics commission to develop relevant governmental expertise would be a good idea. The British government is sensibly expanding the remit of the Alan Turing Institute to encompass AI.

Some tech companies have already engaged the public on ethical issues concerning AI, and the rest should be encouraged to do so. Arguably, they should also be held liable for the misuse of their AI-enabled products in the same way that pharmaceutical firms are responsible for the harmful side-effects of their drugs.

Companies should be deterred from rushing AI-enabled products to market before they have been adequately tested. Just as the potential flaws of cyber security systems are sometimes explored by co-operative hackers, so AI services should be stress-tested by other expert users before their release.

Ultimately, we should be realistic that only so much can ever be done to limit the abuse of AI. Rogue regimes will inevitably use it for bad ends. We cannot uninvent scientific discovery. But we should, at least, do everything possible to restrain its most immediate and obvious downsides.
