America’s Risky Approach to Artificial Intelligence
The brilliant 2014 science fiction novel “The Three-Body Problem,” by the Chinese writer Liu Cixin, depicts the fate of civilizations as almost entirely dependent on winning grand races to scientific milestones. Someone in China’s leadership must have read that book, for Beijing has made winning the race to artificial intelligence a national obsession, devoting billions of dollars to the cause and setting 2030 as the target year for world dominance. Not to be outdone, President Vladimir Putin of Russia recently declared that whoever masters A.I. “will become the ruler of the world.”
To be sure, the bold promises made by A.I.’s true believers can seem excessive; today’s A.I. technologies are useful only in narrow situations. But if there is even a slim chance that the race to build stronger A.I. will determine the future of the world — and that does appear to be at least a possibility — the United States and the rest of the West are taking a surprisingly lackadaisical and alarmingly risky approach to the technology.
The plan seems to be for the American tech industry, which makes most of its money in advertising and selling personal gadgets, to serve as champions of the West. Those businesses, it is hoped, will research, develop and disseminate the most important basic technologies of the future. Companies like Google, Apple and Microsoft are formidable entities, with great talent and resources that approximate those of small countries. But they don’t have the resources of large countries, nor do they have incentives that fully align with the public interest.
To exaggerate slightly: If this were 1957, we might as well be hoping that the commercial airlines would take us to the moon.
If the race for powerful A.I. is indeed a race among civilizations for control of the future, the United States and European nations should be spending at least 50 times the amount they do on public funding of basic A.I. research. Their model should be the research that led to the internet, funded by the Advanced Research Projects Agency, created by the Eisenhower administration and arguably the most successful publicly funded science project in American history.
To their credit, companies like Google, Amazon, Microsoft and Apple are spending considerable money on advanced research. Google has been willing to lose about $500 million a year on DeepMind, an artificial intelligence lab, and Microsoft has invested $1 billion in its independent OpenAI laboratory. In these efforts they are part of a tradition of pathbreaking corporate laboratories like Bell Labs, Xerox’s Palo Alto Research Center and Cisco Systems in its prime.
But it would be a grave error to think that we can rest assured that Silicon Valley has it all taken care of. The history of computing research is a story not just of big corporate laboratories but also of collaboration and competition among civilian government, the military, academia and private players both big (IBM, AT&T) and small (Apple, Sun).
When it comes to research and development, each of these actors has advantages and limitations. Compared with government-funded research, corporate research, at its best, can offer a stimulating balance of theory and practice, yielding inventions like the transistor and the Unix operating system. But big companies can also be secretive, occasionally paranoid and sometimes just wrong, as with AT&T’s dismissal of internet technologies.
Big companies can also change their priorities. Cisco, once an industry leader, has spent more than $129 billion in stock buybacks over the past 17 years, while its chief Chinese competitor, Huawei, developed the world’s leading 5G products.
Some advocates of more A.I. research have called for a “Manhattan project” for A.I. — but that’s not the right model. The atomic bomb and the moon rocket were giant but discrete projects. In contrast, A.I. is a broad and vague set of scientific technologies that encompass not just recent trends in machine learning but also anything else designed to replicate or augment human cognition. We don’t necessarily want a single-minded fixation on a particular idea of what A.I. will become.
As it did with the research that led to the internet, the United States government should broadly fund basic research and insist on broad dissemination, with the exception of tools that might be dangerous. Not all government funding need go to academia: Private research institutions like OpenAI that are committed to the principled development of A.I. and the broad dissemination of research findings should also be potential recipients.
In addition, the United States needs to support immigration laws that attract the world’s top A.I. talent. The history of breakthroughs made by start-ups also suggests the need for policies, like the enforcement of antitrust laws and the defense of net neutrality, that give small players a chance.
It is telling that the computer scientist and entrepreneur Kai-Fu Lee, in his book “AI Superpowers: China, Silicon Valley, and the New World Order,” describes a race between China and Silicon Valley, as if the latter were the sum total of Western science in this area. In the future, when we look back at this period, we may come to regret the loss of a healthy balance between privately and publicly funded A.I. research in the West, and the drift of too much scientific and engineering talent into the private sector.
Tim Wu (@superwuster) is a law professor at Columbia, a contributing opinion writer and the author of “The Master Switch: The Rise and Fall of Information Empires.”