You can find the current article at its original source at https://www.nytimes.com/2023/12/08/briefing/ai-dominance.html


The Race to Dominate A.I.
Just before Thanksgiving, a Silicon Valley giant appeared to implode before our eyes. A boardroom coup at OpenAI, the world’s hottest artificial intelligence company, pushed out its charismatic leader, Sam Altman.

At the time, the ouster — and Altman’s roller-coaster ride to reclaim his job as C.E.O. — seemed sudden. In reality, it was more than a decade in the making. A.I. had been simmering in the tech world, as powerful figures poured money into research and fought with one another over heady questions of humanity, philosophy and power.

This week, with our colleagues Mike Isaac and Nico Grant, we published a series recounting the recent history of A.I. and looking ahead to its future. In today’s newsletter, we explain what we learned.

Powerful tech leaders — including Altman, Elon Musk and the Google co-founder Larry Page — were developing A.I. systems for years before the technology went mainstream. The men bickered over whether it would end up harming the world; some, including Musk, feared that A.I. would turn dystopian science fiction into reality, with computers becoming smart enough to escape human control.

At the heart of these disagreements was a brain-stretching paradox: The men who said they were most worried about A.I. were among the most determined to create it. They justified that ambition by saying that they alone had the morals and skill to prevent A.I. tools from becoming rogue machines that could endanger humanity.

Eventually, these disputes led them to split off and form their own A.I. labs. Each schism created more competition, which pushed the companies to advance A.I. even faster.
The newly formed A.I. labs improved their technology over the years. But nothing captured the public’s attention like ChatGPT, OpenAI’s chatbot, which debuted last year. It was an enormous hit, attracting millions of users with its ability to write poetry, summarize research and mimic everyday conversation.
Our reporting found that Altman and OpenAI did not appreciate what they were about to unleash when they released ChatGPT. Internally, the company called the chatbot a “low key research preview.” Researchers and engineers at OpenAI were instead focused on developing more advanced technology.
ChatGPT’s popularity supercharged the competition at big tech companies like Google and Meta, Facebook’s parent company, which raced to get their own products into the world.
Though the companies were concerned that their A.I. chatbots were inaccurate or biased, they put those worries to the side — at least for the moment. As one Microsoft executive wrote in an internal email, “speed is even more important than ever.” It would be, he added, an “absolutely fatal error in this moment to worry about things that can be fixed later.”
A.I. has since sneaked into daily life, through chatbots and image generators, in the word processing programs you might use at work, and in the seemingly human customer service agents you chat with online to return a purchase. People have already used it to create sophisticated phishing emails, cheat on schoolwork and spread disinformation.
Though OpenAI was founded as a nonprofit, Altman transformed it into a commercial operation that investors now value at more than $80 billion. As Altman raced to advance the technology, some directors on the nonprofit’s board worried he was not being honest with them and felt they could no longer trust him to prioritize safety.
That one person could be so central to the future of A.I. — and perhaps humanity — is a symptom of the lack of meaningful oversight of the industry.
A.I. systems are advancing so rapidly and unpredictably that even on the rare occasions lawmakers and regulators have tried to tackle them, their proposals quickly become obsolete, as our colleagues Adam Satariano and Cecilia Kang found. For example, European regulators proposed “future proof” rules in mid-2021 that limited how A.I. could be used in sensitive cases, such as in hiring decisions and law enforcement. But the regulations did not contemplate the advances behind ChatGPT, which was released a year and a half later.
The absence of rules has left a vacuum. The leading A.I. companies have proposed some voluntary guidelines — like using watermarks to help consumers spot A.I.-generated material — but it’s not clear how much they will matter.
European regulators this week are in marathon sessions to write the world’s strictest A.I. regulations, and they will be worth watching. In the meantime, companies continue to push ahead. On Wednesday, Google demonstrated a powerful new A.I. system called Gemini Ultra, even though Google hasn’t yet completed its customary safety testing. The company promised it would be out in the world early next year.
Related: Artists are using A.I. to produce or augment their work. Read about one.
The Israeli military said it had detained hundreds of people suspected of terrorism, including Hamas fighters.
Israel has found information about Hamas’s attack plans on Oct. 7, as well as data about the group’s tactics and abilities.
Israel accused Hamas of firing rockets from designated humanitarian zones where thousands of Palestinians have sought refuge.
Criticism of Harvard, M.I.T. and Penn mounted after congressional testimony from their presidents about antisemitism. (Representative Elise Stefanik, a New York Republican, went viral for her questioning.)
Donald Trump’s lawyers appealed a judge’s ruling that he is not immune from prosecution, part of Trump’s effort to delay his Jan. 6 criminal trial until after the 2024 election.
A special counsel charged Hunter Biden with failing to pay taxes on millions in income.
A super PAC backing Ron DeSantis’s presidential bid is running ads that liken Nikki Haley to Hillary Clinton. Here’s a fact-check of the claims.
Hard-right House Republicans are once again angry at Speaker Mike Johnson — this time for making a deal with Democrats to strip conservative provisions from a defense bill.
The House censured Representative Jamaal Bowman, a New York Democrat, for setting off a fire alarm during a debate in September.
Biden tied Ukraine aid to border security, and it backfired on him, Zolan Kanno-Youngs writes.
China’s electric-car factories can’t hire fast enough to keep up with their rapid expansion.
Britain said Russia had targeted its lawmakers in cyberattacks for years.
A city-size iceberg is moving out of Antarctic waters and will eventually melt.
A Texas judge ruled that a woman whose fetus has a fatal condition could get an abortion, overriding the state’s strict ban. The Texas attorney general said the woman and hospital staff could still face prosecution.
In a lawsuit, survivors of a sex cult accused Sarah Lawrence College of negligence for allowing a predator into their dorm.
Catholic nuns with shares in Smith & Wesson are suing the gun company for selling an AR-15-style rifle.
Meteorologists expect an odd weekend of weather in the eastern U.S., with unseasonal warmth and heavy rain.
Canada’s new tech law makes the country a test case for a world where Google shares news without deciding which outlets succeed and which fail, Julia Angwin writes.
Universities must resolve a double standard: They either punish antisemitism or accept all offensive speech, Bret Stephens writes.
The House hearing on campus antisemitism confirmed people’s worst fears. But watching the whole hearing reveals the trap university presidents entered, Michelle Goldberg writes.
Heirloom: You can buy Hemingway’s typewriter. But would you use it?
Turtle transit: They washed ashore in Massachusetts. To save them, private planes are taking them south.
Scottish stink: This may be the world’s smelliest cheese.
Modern Love: Divorce taught a lesson — never rely on a man for money.
Lives Lived: Juanita Castro supported her brother Fidel when he led the uprising that toppled Cuba’s dictator in 1959. But she broke with him over his crackdown on dissent and went on to collaborate with the C.I.A. before fleeing Cuba in 1964. She died at 90.
N.F.L.: Bailey Zappe, an unlikely hero, led the Patriots to a 21-18 win over the Steelers.
Basketball: The Pacers and the Lakers will play for the first N.B.A. Cup on Saturday, after Los Angeles walloped New Orleans and Indiana edged Milwaukee in the semifinals.
Golf move: Jon Rahm is joining LIV.
Haute cuisine: Hundreds of Parisians stood in line at dawn Wednesday, awaiting their first bite of a delicacy: a Krispy Kreme doughnut. The pastry chain opened its first restaurant in France, joining a market where American chains like McDonald’s, Starbucks and Popeyes are thriving. “This is all about American pop culture,” said Alexandre Maizoué, the director general of Krispy Kreme France. “They’ve seen all the American series. They like U.S. culture and the American art de vivre.”
Sony said that purchased Discovery shows, including “MythBusters” and “Deadliest Catch,” would soon be deleted from PlayStation devices.
Benjamin Zephaniah, a poet who wrote about social justice issues and helped inspire a generation of British poets, died at 65.
Jon Fosse, who will receive the Nobel Prize in Literature this weekend, said that a childhood brush with death had influenced his literary work. Read a profile of him.
Late-night hosts slammed Vivek Ramaswamy for pushing conspiracy theories.
Bake a round of brie for your next party.
Make a great photo book.
Shovel snow with the right tools.
Take the news quiz.
Here is today’s Spelling Bee. Yesterday’s pangrams were conflict and infliction.
And here are today’s Mini Crossword, Wordle, Sudoku and Connections.
Thanks for spending part of your morning with The Times. See you tomorrow.
Sign up here to get this newsletter in your inbox. Reach our team at themorning@nytimes.com.