OpenAI co-founder wanted to build doomsday bunker to protect scientists from the “rapture”: book

The co-founder of ChatGPT maker OpenAI proposed building a doomsday bunker to house the company’s leading researchers in the event of a “rapture” triggered by the release of a new form of artificial intelligence that could surpass humans’ cognitive abilities, according to a new book.

Ilya Sutskever, the man widely regarded as the brains behind ChatGPT, convened a meeting with key scientists at OpenAI in the summer of 2023 during which he said, “Once we all get into the bunker …”

A confused researcher interrupted him. “Sorry,” the researcher asked, “the bunker?”

Ilya Sutskever, the co-founder of OpenAI, told the company’s top scientists that they should enter a bunker after the launch of artificial general intelligence.

“We’re definitely going to build a bunker before we release AGI,” Sutskever said, according to an attendee.

The plan, he explained, would be to protect OpenAI’s core scientists from what he predicted would be geopolitical chaos or violent competition among world powers once an artificial intelligence exceeding human capabilities was released.

“Of course,” he added, “it’s going to be optional whether you want to get into the bunker.”

The exchange was first reported by Karen Hao, author of the forthcoming book “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI.”

An essay adapted from the book was published by The Atlantic.

Sutskever’s bunker comment was not a one-off. Two other sources told Hao that Sutskever regularly referred to the bunker in internal discussions.

Sutskever proposed building a doomsday bunker to house the company’s leading researchers in the event of a “rapture” triggered by the launch of AGI. Above is a stock photo. Ida – Stock.adobe.com

One OpenAI researcher went so far as to say that “there is a group of people, Ilya being one of them, who believe that building AGI will bring about a rapture. Literally, a rapture.”

Although Sutskever declined to comment on the matter, the idea of a safe haven for scientists developing AGI underscores the extraordinary anxieties gripping some of the minds behind the world’s most powerful technology.

According to the author, Sutskever has been regarded as a kind of mystic at OpenAI, known for discussing AI in moral and even metaphysical terms.

At the same time, he is also one of the most technically gifted minds behind ChatGPT and other large language models that have propelled the company to global prominence.

Sutskever, who left OpenAI after helping to depose Sam Altman as CEO before Altman returned to the position, is widely regarded as the brains behind ChatGPT. AP

In recent years, Sutskever had begun to split his time between accelerating AI capabilities and promoting AI safety, according to colleagues.

The idea of AI triggering civilizational upheaval is not unique to Sutskever.

In May 2023, OpenAI CEO Sam Altman co-signed a public letter warning that AI technologies could pose a “risk of extinction” to humanity. But while that letter sought to shape regulatory debates, the bunker talk suggests deeper, more personal fears among OpenAI’s leadership.

The tension between these fears and OpenAI’s aggressive commercial ambitions came to a head later in 2023 when Sutskever, along with then-chief technology officer Mira Murati, helped orchestrate a brief boardroom coup that ousted Altman from the company.

Sources told Hao that the pair had come to believe Altman was sidestepping internal safety protocols and consolidating too much control over the company’s future.

Sutskever, once a firm believer in OpenAI’s original mission to develop AGI for the benefit of humanity, had grown increasingly disillusioned.

He and Murati told board members that Altman could no longer be trusted to guide the organization responsibly toward its ultimate goal.

In 2023, Mira Murati, then OpenAI’s chief technology officer, helped Sutskever orchestrate a brief boardroom coup that ousted CEO Sam Altman (above) from the company.

“I don’t think Sam is the guy who should have the finger on the button for AGI,” Sutskever said, according to notes reviewed by Hao.

The board’s decision to remove Altman was short-lived.

Within days, pressure from investors, employees and Microsoft resulted in his reinstatement. Both Sutskever and Murati eventually left the company.

The proposed bunker, though never announced or formally planned, has come to symbolize the extremity of belief among AI insiders.

It captures the magnitude of what OpenAI’s leaders themselves feared their technology could unleash, and the lengths some were prepared to go to in bracing for what they saw as a transformative, possibly cataclysmic, new age.

The publication has sought comment from OpenAI and Sutskever.

Image Source : nypost.com
