Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip human capabilities, but many researchers in the field see the claims as marketing.
Belief that machine intelligence matching or exceeding humans' – often called "artificial general intelligence" (AGI) – will emerge from current techniques fuels hypotheses for the future ranging from machine-delivered abundance to human extinction.
"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".
Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies needed to run it.
Others are more sceptical.
"We are not going to get to human-level AI by just scaling up LLMs," Meta's chief AI scientist Yann LeCun told AFP last month – referring to the large language models behind current systems such as ChatGPT or Claude.
LeCun's view appears to be backed by a majority of academics in the field.
In a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI), most respondents said that simply scaling up current approaches was unlikely to produce AGI.
'Genie out of the bottle'
Some academics believe that many of the companies' claims – at times flanked by warnings about AGI's dangers for mankind – are a strategy to capture attention.
Companies have "made these big investments, and they have to pay off," said Kristian Kersting, an AI researcher at the Technical University of Darmstadt in Germany.
"They just say: 'This is so dangerous that only I can operate it – in fact I myself am afraid – but I will sacrifice myself on your behalf. But then you're dependent on me.'"
Not all academic researchers share the scepticism: Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio have both warned about the dangers of powerful AI.
"It's a bit like Goethe's 'The Sorcerer's Apprentice': you have something you suddenly can no longer control," Kersting said.
A similar, more recent thought experiment is the "paperclip maximiser".
This imagined AI, given the sole goal of making paperclips, would pursue it so single-mindedly that it would turn everything in the universe – including people – into paperclips or paperclip-making machines, having first eliminated the humans who could switch it off.
While not "evil", the maximiser would be fatally lacking in "alignment" with human goals and values.
Kersting said he "can understand" such fears, while suggesting that human intelligence, in its diversity and quality, is so outstanding "that it will take a long time, if ever" for computers to match it.
He is far more concerned with near-term harms from existing AI, such as discrimination when it interacts with people.
'The biggest thing ever'
The stark difference in worldview between academics and AI industry leaders may simply reflect people's choice of career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.
"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies putting a lot of resources into trying to make it happen," he said.
Even if Altman and Amodei may be "quite optimistic" about AGI, he said, "we should think about this and take it seriously, because it would be the biggest thing that would ever happen."
"If it were anything else... a chance that aliens would arrive by 2030, or that there'd be another giant pandemic or something, we'd put some time into planning for it."
The challenge may lie in communicating these ideas to politicians and the public.
Talk of super-AI "instantly creates this sort of immune reaction... it sounds like science fiction," he said.
This story was originally featured on Fortune.com