How Nations Are Losing a Global Race to Tackle A.I.’s Harms
When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
E.U. lawmakers had gotten input about A.I. from thousands of experts over three years, at a time when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
Then came ChatGPT.
The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence. Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace. That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems. A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
“The jury is still out about whether you can regulate this technology or not,” said Andrea Renda, a senior research fellow at the Center for European Policy Studies, a think tank in Brussels. “There’s a risk this E.U. text ends up being prehistorical.”
The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems. Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
Without united action soon, some officials warned, governments may fall further behind the A.I. makers and their breakthroughs.
“No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
The group debated whether there were already enough European rules to protect against the technology and considered potential ethics guidelines, said Nathalie Smuha, a legal scholar in Belgium who coordinated the group.
But as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?” she said.
In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
The report rippled through the insular world of E.U. policymaking. Ursula von der Leyen, the president of the European Commission, made the topic a priority on her digital agenda. A 10-person group was assigned to build on the experts’ ideas and draft a law. A committee in the European Parliament, the European Union’s co-legislative branch, held nearly 50 hearings and meetings to consider A.I.’s effects on cybersecurity, agriculture, diplomacy and energy.
In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered those uses unless they were listed as dangerous.
Under the proposal, organizations offering risky A.I. tools would have to meet certain requirements to ensure those systems were safe before being deployed. A.I. software that created manipulated videos and “deepfake” images would have to disclose that people were seeing A.I.-generated content. Other uses, such as live facial recognition software, were banned or restricted. Violators could be fined 6 percent of their global sales.
Some experts warned that the draft law did not sufficiently account for A.I.’s future twists and turns.
“They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
E.U. leaders were undeterred.
“Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
Nineteen months later, ChatGPT arrived.
The European Council, another branch of the European Union, had just agreed to regulate general purpose A.I. models, but the new chatbot reshuffled the debate. It revealed a “blind spot” in the bloc’s policymaking over the technology, said Dragos Tudorache, a member of the European Parliament who had argued before ChatGPT’s release that the new models must be covered by the law. These general purpose A.I. systems not only power chatbots but can learn to perform many tasks by analyzing data culled from the internet and other sources.
E.U. officials were divided over how to respond. Some were wary of adding too many new rules, especially as Europe has struggled to nurture its own tech companies. Others wanted more stringent limits.
“We want to be careful not to underdo it, but not overdo it as well and overregulate things that are not yet clear,” said Mr. Tudorache, a lead negotiator on the A.I. Act.
By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
Policymakers were still working on compromises as negotiations over the law’s language entered a final stage this week.
A European Commission spokesman said the A.I. Act was “flexible relative to future developments and innovation friendly.”
Jack Clark, a founder of the A.I. start-up Anthropic, had visited Washington for years to give lawmakers tutorials on A.I. Almost always, just a few congressional aides showed up.
But after ChatGPT went viral, his presentations became packed with lawmakers and aides clamoring to hear his A.I. crash course and views on rule making.
“Everyone has sort of woken up en masse to this technology,” said Mr. Clark, whose company recently hired two lobbying firms in Washington.
Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how the technology works and to help create rules.
“We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists, and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
In Washington, the activity around A.I. has been frenetic — but with no legislation to show for it.
In May, after a White House meeting about A.I., the leaders of Microsoft, OpenAI, Google and Anthropic were asked to draw up self-regulations to make their systems safer, said Brad Smith, Microsoft’s president. After Microsoft submitted suggestions, the commerce secretary, Gina M. Raimondo, sent the proposal back with instructions to add more promises, he said.
Two months later, the White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
“It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
In September, Mr. Schumer hosted Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
Mr. Schumer said the companies knew the technology best.
In some cases, A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
“China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
“Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I. will be several factors more difficult to manage.”
Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
A European Commission spokesman said that the United States and Europe had “worked together closely” on A.I. policy and that the Group of 7 countries unveiled a voluntary code of conduct in October.
A State Department spokesman said there had been “ongoing, constructive conversations” with the European Union, including the G7 accord. At the meeting in Sweden, he added, Mr. Blinken emphasized the need for a “unified approach” to A.I.
Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
The talks, in the end, produced a deal to keep talking. |