How can artificial intelligence be made to benefit humanity?

23 February 2017

The technology industry is facing up to the world-shaking ramifications of artificial intelligence. There is now a recognition that AI will disrupt how societies operate, from education and employment to how data will be collected about people.

Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.

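As a concrete illustration of the pattern recognition described above, the sketch below trains a tiny text classifier on labelled examples and lets it judge new input. The tooling (scikit-learn) and the spam-filter framing are illustrative assumptions, not details from the article.

```python
# A minimal sketch of machine learning as pattern recognition:
# a model generalises a judgment ("is this spam?") from labelled examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Labelled training data: the "large volumes of data" the model learns from.
texts = ["win a free prize now", "meeting moved to 3pm",
         "claim your free reward", "minutes from today's meeting"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The fitted model now makes a judgment about unseen input.
print(model.predict(["free prize inside"]))  # -> [1]
```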

Failures of autonomous systems — like the death last year of a US motorist in a partially self-driving car from Tesla Motors — have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. “That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here,” he says.

Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK’s vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.

Global elites — those with high income and educational levels, who live in capital cities — are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.

Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: “Tech companies must accept responsibility for what they’re creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products.”

The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees’ work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.

Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: “We’ve seen papers . . . that address the technical problem of safety.”

There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago when Bill Gates rallied his company’s developers to combat computer malware. His “trustworthy computing” initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.

AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. “Many of our data sets have been collected . . . with assumptions we may not deeply understand, and we don’t want our machine-learned applications . . . to be amplifying cultural biases,” he said.

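Mr Horvitz’s point about skewed data sets can be made concrete with a small sketch: if a sensitive attribute correlates with the outcome in the historical data, a model trained on that data will learn to lean on it. The hiring framing and all numbers below are hypothetical.

```python
# Illustrative sketch (not from the article): a model trained on skewed data
# reproduces the skew. Here the sensitive attribute "group" correlates with
# the label in the historical data, and the model learns to rely on it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)        # hypothetical sensitive attribute
skill = rng.normal(0, 1, n)          # a legitimate predictor
# Skewed historical labels: group membership shifts the outcome rate.
hired = (skill + 0.8 * group + rng.normal(0, 1, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])  # non-zero: the bias is learned
```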

Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.

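The arithmetic behind ProPublica’s finding is a comparison of false positive rates: among people who did not reoffend, how often each group was labelled high risk. Below is a sketch of that audit on made-up data; the figures are not ProPublica’s.

```python
# A sketch of the kind of audit ProPublica ran: compare false positive
# rates (low-risk people wrongly labelled high risk) across groups.
# The numbers below are hypothetical, not ProPublica's data.
import numpy as np

def false_positive_rate(flagged_high_risk, reoffended):
    # Among people who did NOT reoffend, how many were labelled high risk?
    did_not_reoffend = ~reoffended
    return (flagged_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

rng = np.random.default_rng(1)
groups = rng.integers(0, 2, 1000)            # two hypothetical groups
reoffended = rng.random(1000) < 0.3
# A biased score: more likely to flag group 1 regardless of outcome.
flagged = rng.random(1000) < np.where(groups == 1, 0.5, 0.3)

for g in (0, 1):
    m = groups == g
    print(f"group {g} FPR: {false_positive_rate(flagged[m], reoffended[m]):.2f}")
```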

Greater transparency is one way forward, for example making it clear what information AI systems have used. But the “thought processes” of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. “We need to understand how to justify [their] decisions and how the thinking is done.”

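One generic probe in this direction is permutation importance: shuffle a single input feature and measure how much the model’s accuracy drops, which hints at what information the decisions rest on. It is an illustrative choice, not a method named by Mr Horvitz.

```python
# Illustrative audit probe: permutation importance. Shuffling one feature
# breaks its link to the labels; a large accuracy drop suggests the model's
# decisions depend heavily on that feature.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)  # scored on training data, for brevity

rng = np.random.default_rng(0)
for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {j}: accuracy drop {drop:.3f}")
```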

As AI comes to influence more government and business decisions, the ramifications will be widespread. “How do we make sure the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society?” asks Joi Ito, director of MIT’s Media Lab.

Executives like Mr Nadella believe a mixture of government oversight — including, by implication, the regulation of algorithms — and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.

He says: “I want . . . an ethics board that says, ‘If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact . . . that it doesn’t come with some bias that’s built in.’”

Making sure AI systems benefit humans without unintended consequences is difficult. Human society is incapable of defining what it wants, says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.

This is AI’s so-called “control problem”: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. “The machine has to allow for uncertainty about what it is the human really wants,” says Prof Russell.

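Prof Russell’s point can be caricatured in a few lines: rather than maximising a single fixed objective, the machine keeps several hypotheses about what the human wants and defers when they disagree. This is a toy rendering, not his actual proposal.

```python
# Toy illustration of allowing for uncertainty about human preferences:
# the machine holds competing candidate objectives and asks the human
# when they disagree about whether an action is good or bad.
hypotheses = [                      # hypothetical candidate objectives
    {"make_coffee": 1.0, "unplug_self": -1.0},
    {"make_coffee": 0.2, "unplug_self": 1.0},  # maybe the human wants it off
]
weights = [0.7, 0.3]                # the machine's belief over hypotheses

def decide(action):
    values = [h[action] for h in hypotheses]
    if max(values) > 0 and min(values) < 0:    # objectives disagree on sign
        return f"{action}: ask the human first"
    expected = sum(w * v for w, v in zip(weights, values))
    return f"{action}: do it" if expected > 0 else f"{action}: don't"

for a in ("make_coffee", "unplug_self"):
    print(decide(a))   # make_coffee: do it / unplug_self: ask the human first
```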

Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year’s World Economic Forum in Davos as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling though other jobs could be replaced.

The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. “Whenever someone cuts cost, that means, hopefully, a surplus is being created,” says Mr Nadella. “You can always tax surplus — you can always make sure that surplus gets distributed differently.”

Additional reporting by Adam Jezard
