This article is from 'nytimes'. The original is available at https://www.nytimes.com/2023/11/22/opinion/openai-sam-altman.html
The Unsettling Lesson of the OpenAI Mess
Science fiction writers and artificial intelligence researchers have long feared the machine you can’t turn off. The story goes something like this: A powerful A.I. is developed. Its designers are thrilled, then unsettled, then terrified. They go to pull the plug, only to learn the A.I. has copied its code elsewhere, perhaps everywhere.
Keep that story in mind for a moment.
Two signal events have happened in A.I. in recent weeks. One of them you’ve heard about. The nonprofit that governs OpenAI, the makers of ChatGPT, fired Sam Altman, the company’s chief executive. The decision was unexpected and largely unexplained. “Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” read a cryptic statement.
Many assumed Altman had lied to the board about OpenAI’s finances or safety data. But Brad Lightcap, an OpenAI executive, told employees that no such breach had occurred. “We can say definitively that the board’s decision was not made in response to malfeasance or anything related to our financial, business, safety or security/privacy practices,” he wrote. “This was a breakdown in communication between Sam and the board.”
At the heart of OpenAI is — or perhaps was — a mission. A.I. was too powerful a technology to be controlled by profit-seeking corporations or power-seeking states. It needed to be developed by an organization, as OpenAI’s charter puts it, “acting in the best interests of humanity.” OpenAI was built to be that organization. It began life as a nonprofit. When it became clear that developing A.I. would require tens or hundreds of billions of dollars, it rebuilt itself around a dual structure, where the nonprofit — controlled by a board chosen for its commitment to OpenAI’s founding mission — would govern the for-profit, which would raise the money and commercialize the A.I. applications necessary to finance the mission.