Key sentence
The high-profile event, hosted by the Rishi Sunak-led UK government, caps a year of intense escalation in global discussions about AI safety, following the launch of ChatGPT nearly a year ago.
这一备受瞩目的活动由里希·苏纳克(Rishi Sunak)领导的英国政府主办,标志着自近一年前ChatGPT推出以来,全球关于人工智能安全的讨论经历了一年的激烈升级。
Its viral appeal breathed life into a formerly-niche school of thought that AI could, sooner or later, pose an existential risk to humanity, and prompted policymakers around the world to weigh whether, and how, to regulate the technology.
它的病毒式传播给一个曾经的小众思想流派注入了活力——这个流派认为人工智能迟早会对人类的生存构成威胁——并促使世界各地的政策制定者权衡是否以及如何对这项技术进行监管。
Those discussions have been taking place amid warnings not only that today’s AI tools already present manifold dangers - especially to marginalized communities - but also that the next generation of systems could be 10 or 100 times more powerful, not to mention more dangerous.
这些讨论是在多方警告声中进行的:不仅今天的人工智能工具已经带来了多种危险——尤其是对边缘化社区——而且下一代系统可能比现在强大10倍甚至100倍,更不用说更危险了。
Michelle Donelan, the UK’s science and technology minister, opened the summit on Wednesday, speaking of her hope that delegates gathered for the summit would contribute to an achievement of similar magnitude, “pushing the boundaries of what is actually possible.”
英国科技大臣米歇尔·多尼兰(Michelle Donelan)周三在峰会开幕式上表示,她希望出席峰会的代表们能为取得类似规模的成就做出贡献,“突破实际可能的界限”。
He also announced that Yoshua Bengio, a Turing Award-winning computer scientist, had agreed to chair a body that would seek to establish, in a report, the scientific consensus on risks and capabilities of frontier AI systems.
他还宣布,图灵奖得主、计算机科学家约书亚·本吉奥(Yoshua Bengio)已同意担任一个机构的主席,该机构将在一份报告中寻求就前沿人工智能系统的风险和能力达成科学共识。
Despite the limited progress, delegates at the event welcomed the high-level discussions as a crucial first step toward international collaboration on regulating the technology - acknowledging that while there were many areas of consensus, some key differences remain.
尽管取得了有限的进展,但与会代表对高层讨论表示欢迎,认为这是朝着监管该技术的国际合作迈出的关键的第一步——他们承认,尽管在许多领域达成了共识,但仍存在一些关键分歧。
The declaration said AI poses both short-term and longer-term risks, affirmed the responsibility of the creators of powerful AI systems to ensure they are safe, and committed to international collaboration on identifying and mitigating the risks.
宣言称,人工智能带来了短期和长期风险,肯定了强大的人工智能系统的创造者有责任确保它们的安全,并致力于在识别和减轻风险方面进行国际合作。
The UK government, as organizer of the Summit, has walked a fine line between communicating that it is serious about AI risks on one hand, while telegraphing to tech companies that it is open for business on the other.
作为此次峰会的组织者,英国政府一直在微妙地走钢丝,一方面传达出它对人工智能风险的严肃态度,另一方面又向科技公司发出信号,表明它对商业开放。
“For me, the biggest risk actually that we face, is the risk of missing out on all these incredible opportunities that AI can truly present,” Donelan told tech industry luminaries at a reception at Google DeepMind’s headquarters on the eve of the Summit. “If we actually terrify people too much, or if we shy away because we don’t grip these risks, then we won’t see the adoption in our NHS, we won’t see the adoption in our transport network, we won’t be able to utilize AI to tackle climate change or to support developing nations to tackle issues like food inequality. And that would be the biggest tragedy that we could imagine.”
峰会前夕,多尼兰在谷歌DeepMind总部的一场招待会上对科技行业名人表示:“对我来说,我们面临的最大风险,实际上是错失人工智能真正能够带来的所有这些惊人机遇的风险。如果我们把人们吓得太厉害,或者因为没有掌控好这些风险而畏缩退却,那么我们就不会看到NHS采用人工智能,不会看到交通网络采用人工智能,也无法利用人工智能应对气候变化,或支持发展中国家解决粮食不平等等问题。那将是我们所能想象到的最大悲剧。”
The US, on the other hand, made several announcements this week that threatened to overshadow the UK’s claim to global leadership on AI safety. In a speech in London on Wednesday, Vice President Kamala Harris announced a sweeping set of US actions, including the establishment of an American AI Safety Institute. Harris said the body would create guidelines for risk evaluations of AI systems, and develop guidance for regulators on issues like watermarking AI-generated material and combating algorithmic discrimination. Harris’s announcement followed an executive order signed by President Joe Biden on Monday, requiring AI companies to notify the federal government when training potentially dangerous models, and share the results of safety tests before making them public.
另一方面,美国本周发布的几项声明可能会令英国在人工智能安全方面的全球领导地位相形见绌。周三,美国副总统卡玛拉·哈里斯(Kamala Harris)在伦敦发表演讲,宣布了美国的一系列行动,包括成立美国人工智能安全研究所(American AI Safety Institute)。哈里斯表示,该机构将为人工智能系统的风险评估制定指导方针,并就为人工智能生成的内容添加水印、打击算法歧视等问题为监管机构制定指导意见。在哈里斯宣布这一消息之前,总统乔·拜登(Joe Biden)周一签署了一项行政命令,要求人工智能公司在训练具有潜在危险的模型时通知联邦政府,并在公开之前分享安全测试的结果。
The British foreign secretary, James Cleverly, played down suggestions on Thursday that the US had overshadowed the UK with its announcements. “This isn’t about hoarding, this is about sharing,” he told TIME. “This is something we want everyone involved in. It’s not exclusive, it’s inclusive.”
英国外交大臣詹姆斯·克莱弗利(James Cleverly)周四淡化了有关美国的声明令英国黯然失色的说法。“这不是囤积,而是分享,”他告诉《时代》杂志。“这是我们希望每个人都参与的事情。它不是排他的,而是包容的。”
Connor Leahy, CEO of the AI safety company Conjecture, who has been particularly vocal about what he says are serious existential threats posed by AI, told TIME on Wednesday he had been impressed by the caliber of discussions and the near-uniform agreement that collaboration to address risks was necessary. “Overall, I think the UK has done something really phenomenal here,” he said, praising the number of high-level attendees from both government and industry. “This is not the place where policy gets made in practice, this is the kind of place where the groundwork gets laid.”
人工智能安全公司Conjecture的首席执行官康纳·莱希(Connor Leahy)一直对他所说的人工智能构成的严重生存威胁直言不讳。他周三告诉《时代》(TIME)杂志,讨论的水准以及与会者对必须合作应对风险近乎一致的认同给他留下了深刻印象。“总体而言,我认为英国在这里做了一件真正了不起的事情,”他说,并称赞了来自政府和行业的高级别与会者之多。“这里不是实际制定政策的地方,而是奠定基础的地方。”
Select members of civil society were invited to attend closed-door sessions with policymakers and technologists, although some of them chafed at what they said was insufficient representation. “If this is truly a global conversation, why is it mostly US and UK civil society?” said Vidushi Marda, a delegate at the event from the non-profit REAL ML, who is based in Bangalore, India. “Most of the consequential decisions are pretty opaque to us, even though we are in the room.”
民间社会的部分成员受邀参加了与政策制定者和技术专家举行的闭门会议,尽管他们中的一些人对其所称的代表性不足感到不满。“如果这真的是一场全球对话,为什么到场的主要是美国和英国的公民社会?”来自非营利组织REAL ML、常驻印度班加罗尔的与会代表维杜希·马尔达(Vidushi Marda)说。“大多数重大决定对我们来说都相当不透明,尽管我们就在房间里。”
While the Summit may have succeeded to some extent at bridging the divide between researchers warning of near- and long-term risks, a separate difference in opinion – between open-source and closed-source approaches to AI research – was evident among many of the industry attendees. Advocates of more restricted AI research say that the dangers of advanced AI are too significant for the source code of powerful models to be freely distributed. The open-source community disagrees, saying that profit-driven companies monopolizing AI research is likely to lead to bad outcomes, and argues that open-sourcing models can accelerate safety research.
虽然峰会可能在一定程度上成功弥合了警告近期风险与警告长期风险的研究人员之间的分歧,但在许多行业与会者中,另一种观点分歧——人工智能研究的开源与闭源路线之争——也显而易见。主张限制人工智能研究的人认为,先进人工智能的危险过大,强大模型的源代码不应被自由分发。开源社区对此并不认同,认为由追求利润的公司垄断人工智能研究很可能导致糟糕的结果,并主张开源模型可以加快安全研究。
The symbolism of the gathering at Bletchley Park, home of a wartime effort where great minds came together to safeguard life and liberty in the face of an existential threat, was not lost on many attendees. But if the Summit fails to deliver the desired results, an alternative historical comparison might prove a better metaphor. After the war, the celebrated code-breaking agency based in Bletchley evolved into GCHQ - the UK’s intelligence agency that, in partnership with the US National Security Agency, conducted indiscriminate global mass surveillance programs - using technology not to safeguard citizens, but to systematically violate their rights.
在布莱切利公园(Bletchley Park)举行的这次集会,其象征意义并没有被许多与会者忽视。布莱切利公园是战时一项事业的发源地:在那里,杰出的头脑曾齐聚一堂,在生存威胁面前捍卫生命与自由。但如果峰会未能取得预期结果,另一种历史比较可能会是更贴切的比喻。战后,总部位于布莱切利的这家著名密码破译机构演变为英国情报机构GCHQ。GCHQ与美国国家安全局(National Security Agency)合作,实施了不加区分的全球大规模监控项目——利用技术不是为了保护公民,而是系统性地侵犯他们的权利。
“The mythology of Bletchley has been instrumented by successive governments to justify surveillance and increasing technological control, implying that these efforts spring from the same source as the UK’s anti-fascist technological endeavors during the second world war,” Meredith Whittaker, president of Signal, told TIME. “So it’s not surprising that the current government’s attempt to get close to the powerful US-based AI industry would leverage and stretch this same mythology, hoping that the glow of the past can obscure the reality of the present.”
Signal总裁梅雷迪思·惠特克(Meredith Whittaker)在接受《时代》杂志采访时表示:“布莱切利的神话一直被历届政府用来为监视和日益加强的技术控制辩护,暗示这些努力与二战期间英国反法西斯的技术事业同出一源。因此,现任政府在试图接近强大的美国人工智能行业时,利用并延伸同一套神话,希望过去的光辉能够掩盖当下的现实,这并不令人意外。”
重点词汇
核心术语
英文词汇 | 中文释义 |
---|---|
escalation | 升级/加剧 |
niche | 小众的 |
existential | 关乎存亡的/生存的 |
manifold | 多样的/多方面的 |
marginalize | 边缘化 |
magnitude | 规模/重要性 |
crucial | 关键的 |
pose | 构成(风险) |
mitigating | 减轻/缓解 |
walked a fine line | 走钢丝(平衡策略) |
incredible | 难以置信的/惊人的 |
luminary | 杰出人物 |
shy away | 回避/退缩 |
grip | 抓住/掌控 |
utilize | 利用 |
tragedy | 悲剧 |
threatened | 威胁/可能破坏 |
overshadow | 使相形见绌/遮蔽 |
guidance | 指导方针 |
regulator | 监管机构 |
algorithmic discrimination | 算法歧视 |
play down | 淡化/轻描淡写 |
hoard | 囤积/垄断 |
exclusive | 排他的 |
inclusive | 包容的 |
particularly vocal | 直言不讳的 |
phenomenal | 非凡的/显著的 |
chafed | 感到恼火 |
insufficient | 不足的 |
consequential | 重要的/有重大影响的 |
opaque | 不透明的/难以理解的 |
bridging | 弥合/连接 |
separate | 不同的/独立的 |
advocate | 倡导者 |
monopolize | 垄断 |
safeguard | 保护/捍卫 |
metaphor | 比喻/隐喻 |
celebrated | 著名的/备受赞誉的 |
indiscriminate | 不加区分的/无差别的 |
surveillance | 监控 |
systematically | 系统性地 |
mythology | 神话/虚构叙事 |
instrumented | 被利用/被工具化 |
successive | 连续的/历届的 |
justify | 合理化/证明正当性 |
spring from | 源自/发端于 |
obscure | 掩盖/模糊 |
重点短语解析
- walk a fine line
  - 语境:描述英国在AI风险管控与商业开放间的平衡策略
  - 记忆点:类似中文"走钢丝"的意象
- algorithmic discrimination
  - 应用场景:AI招聘工具可能对少数族裔产生不公正结果
- indiscriminate surveillance
  - 关联事件:斯诺登曝光的"棱镜计划"
- spring from
  - 句式拓展:The controversy springs from differing views on…(争议源于…的不同观点)