Coming to DEF CON 31: Hacking AI models
A group of prominent AI companies committed to opening their models to attack at this year's DEF CON hacking conference in Las Vegas.

A group of leading artificial intelligence companies in the U.S. committed on Thursday to open their models to red-teaming at this year's DEF CON hacking conference as part of a White House initiative to address the security risks posed by the rapidly advancing technology.
Attendees at the premier hacking conference, held annually in Las Vegas in August, will be able to attack models from Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI and Stability AI in an attempt to find vulnerabilities. The event, hosted at the AI Village, is expected to draw thousands of security researchers.
A senior administration official, speaking to reporters on condition of anonymity ahead of the announcement, said the red-teaming event will be the first public assessment of large language models. "Red-teaming has been really helpful and very successful in cybersecurity for identifying vulnerabilities," the official said. "That's what we're now working to adapt for large language models."