
AI Pioneers Call For Protections Against 'Catastrophic Risks'

Tuesday, September 17, 2024, 05:30 AM, from Slashdot
An anonymous reader quotes a report from the New York Times: Scientists who helped pioneer artificial intelligence are warning that countries must create a global system of oversight to check the potentially grave risks posed by the fast-developing technology. The release of ChatGPT and a string of similar services that can create text and images on command has shown how A.I. is advancing in powerful ways. The race to commercialize the technology has quickly brought it from the fringes of science to smartphones, cars and classrooms, and governments from Washington to Beijing have been forced to figure out how to regulate and harness it. In a statement on Monday, a group of influential A.I. scientists raised concerns that the technology they helped build could cause serious harm. They warned that A.I. technology could, within a matter of years, overtake the capabilities of its makers and that 'loss of human control or malicious use of these A.I. systems could lead to catastrophic outcomes for all of humanity.'

If A.I. systems anywhere in the world were to develop these abilities today, there is no plan for how to rein them in, said Gillian Hadfield, a legal scholar and professor of computer science and government at Johns Hopkins University. 'If we had some sort of catastrophe six months from now, if we do detect there are models that are starting to autonomously self-improve, who are you going to call?' Dr. Hadfield said. On Sept. 5-8, Dr. Hadfield joined scientists from around the world in Venice to talk about such a plan. It was the third meeting of the International Dialogues on A.I. Safety, organized by the Safe AI Forum, a project of a nonprofit research group in the United States called Far.AI. Governments need to know what is going on at the research labs and companies working on A.I. systems in their countries, the group said in its statement. And they need a way to communicate about potential risks that does not require companies or researchers to share proprietary information with competitors. The group proposed that countries set up A.I. safety authorities to register the A.I. systems within their borders. Those authorities would then work together to agree on a set of red lines and warning signs, such as if an A.I. system could copy itself or intentionally deceive its creators. This would all be coordinated by an international body.

Among the signatories was Yoshua Bengio, whose work is so often cited that he is called one of the godfathers of the field. There was Andrew Yao, whose course at Tsinghua University in Beijing has minted the founders of many of China's top tech companies. Geoffrey Hinton, a pioneering scientist who spent a decade at Google, participated remotely. All three are winners of the Turing Award, the equivalent of the Nobel Prize for computing. The group also included scientists from several of China's leading A.I. research institutions, some of which are state-funded and advise the government. A few former government officials joined, including Fu Ying, who had been a Chinese foreign ministry official and diplomat, and Mary Robinson, the former president of Ireland. Earlier this year, the group met in Beijing, where they briefed senior Chinese government officials on their discussion.

Read more of this story at Slashdot.
https://news.slashdot.org/story/24/09/16/2226243/ai-pioneers-call-for-protections-against-catastroph...
