紐時賞析/美國會憂AI生化武器問世 頂尖科學家簽協議規範研究發展

AI示意圖。(聯合報系資料庫)

Dozens of Top Scientists Sign Effort to Prevent Artificial Intelligence Bioweapons

頂尖科學家簽協議 阻AI生化武器問世

Dario Amodei, CEO of high-profile artificial intelligence startup Anthropic, told Congress last year that new AI technology could soon help unskilled but malevolent people create large-scale biological attacks.

備受矚目的人工智慧新創公司Anthropic執行長阿莫戴去年告訴美國國會,新的人工智慧科技可能很快就會幫助沒有相關技能但心懷惡意的人,發動大規模生化攻擊。

Senators from both parties were alarmed, while AI researchers in industry and academia debated how serious the threat might be.

美國兩黨的參議員都感到震驚,而業界和學術界的人工智慧研究員正在爭辯威脅會有多嚴重。

Now, more than 90 biologists and other scientists who specialize in AI technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their AI-aided research will move forward without exposing the world to serious harm.

現在,超過90名生物學家及其他專精人工智慧設計新蛋白質的科學家,已簽署一項協議,尋求確保他們由人工智慧輔助的研究能在不對世界造成嚴重損害狀況下繼續發展。在生物學中,蛋白質是驅動萬物的微觀機制。

The biologists, who include Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would have far more benefits than negatives, including new vaccines and medicines.

這些生物學家包括諾貝爾獎得主弗朗西絲.阿諾德,分別來自美國及其他國家的實驗室;他們也主張,這些最新科技帶來的好處將遠大於壞處,包括新疫苗和藥物問世。

“As scientists engaged in this work, we believe the benefits of current AI technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.

這份協議寫道:「身為投入這項工作的科學家,我們相信當前用於蛋白質設計的人工智慧科技,好處遠大於潛在的危害,我們希望確保我們的研究在未來仍對所有人有益。」

The agreement does not seek to suppress the development or distribution of AI technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.

這份協議並不尋求抑制人工智慧科技的發展或傳播。相反地,這些生物學家的目標是規範製造新遺傳物質所需設備的使用。

This DNA manufacturing equipment is ultimately what allows for the development of bioweapons, said David Baker, director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.

華盛頓大學蛋白質設計研究所所長貝克協助促成這份協議。他說,DNA製造設備正是最終讓生化武器得以開發的關鍵。

“Protein design is just the first step in making synthetic proteins,” he said. “You then have to actually synthesize DNA and move the design from the computer into the real world — and that is the appropriate place to regulate.”

他表示:「蛋白質設計只是製造合成蛋白質的第一步。接著你必須實際合成DNA,並將設計從電腦移到現實世界,而這正是適合加以規範的環節。」

The biologists called for the development of security measures that would prevent DNA manufacturing equipment from being used with harmful materials — though it is unclear how those measures would work. They also called for safety and security reviews of new AI models before releasing them.

生物學家呼籲發展安全措施,防止DNA製造設備被用於有害材料,儘管目前還不清楚那些措施將如何運作。他們也呼籲在發布新的人工智慧模型前,先進行安全審查。

They did not argue that the technologies should be bottled up.

他們並未主張應將這些科技封鎖起來。

“These technologies should not be held only by a small number of people or organizations,” said Rama Ranganathan, a professor of biochemistry and molecular biology at the University of Chicago, who signed the agreement.

參與簽署該協議的芝加哥大學生物化學與分子生物學教授蘭加納坦表示:「這些科技不應被少數人或組織掌握。」

文/Cade Metz 譯/羅方妤