Can Global Leaders Get a Handle on A.I.? U.K. Summit Makes a Start
In 1950, Alan Turing, the gifted British mathematician and code-breaker, published an academic paper. His aim, he wrote, was to consider the question, “Can machines think?”
The answer runs to almost 12,000 words. But it ends succinctly: “We can only see a short distance ahead,” Mr. Turing wrote, “but we can see plenty there that needs to be done.”
More than seven decades on, that sentiment sums up the mood of many policymakers, researchers and tech leaders attending Britain’s A.I. Safety Summit on Wednesday, which Prime Minister Rishi Sunak hopes will position the country as a leader in the global race to harness and regulate artificial intelligence.
Governments have scrambled to address the risks posed by the fast-evolving technology since last year’s release of ChatGPT, a humanlike chatbot that demonstrated how the latest models are advancing in powerful and unpredictable ways.
Future generations of A.I. systems could accelerate the diagnosis of disease, help combat climate change and streamline manufacturing processes, but also present significant dangers in terms of job losses, disinformation and national security. A British government report last week warned that advanced A.I. systems “may help bad actors perform cyberattacks, run disinformation campaigns and design biological or chemical weapons.”
Mr. Sunak promoted this week’s event, which convenes governments, companies, researchers and civil society groups, as a chance to start developing global safety standards. On Wednesday morning, the British government released a document called “The Bletchley Declaration,” signed by representatives from the 28 countries attending the event, which pledged international cooperation and continued dialogue on the issues of safety.
“Many risks arising from A.I. are inherently international in nature, and so are best addressed through international cooperation,” the declaration said. “We resolve to work together in an inclusive manner to ensure human-centric, trustworthy and responsible A.I. that is safe, and supports the good of all.”
It fell short, however, of setting specific policy goals. A second meeting is scheduled to be held in six months in South Korea and a third in France in a year.
The two-day summit in Britain is being held at Bletchley Park, a countryside estate 50 miles north of London, where Mr. Turing helped crack the Enigma code used by the Nazis during World War II. Considered one of the birthplaces of modern computing, the location is a conscious nod to the prime minister’s hopes that Britain could be at the center of another world-leading initiative.
Bletchley is “evocative in that it captures a very defining moment in time, where great leadership was required from government but also a moment when computing was front and center,” said Ian Hogarth, a tech entrepreneur and investor who was appointed by Mr. Sunak to lead the government’s task force on A.I. risk, and who helped organize the summit. “We need to come together and agree on a wise way forward.”
With Elon Musk and other tech executives sitting in the audience, King Charles III delivered a video address in the opening session, which he recorded at Buckingham Palace before departing for a state visit to Kenya this week, noting, “We are witnessing one of the greatest technological leaps in the history of human endeavor.”
“There is a clear imperative to ensure that this rapidly evolving technology remains safe and secure. And because A.I. does not respect international boundaries, this mission demands international coordination and collaboration,” the king said.
Vice President Kamala Harris will take part in the meetings on behalf of the United States, and Ursula von der Leyen, the president of the European Commission, and Giorgia Meloni, the Italian prime minister, are also expected.
Representatives from China, a major developer of artificial intelligence that has been largely absent from many international discussions about governance, were in attendance, along with delegates from governments including Brazil, India, Saudi Arabia and Ukraine.
Wu Zhaohui, China’s vice minister of science and technology, said in a speech at the event that Beijing was willing to “enhance dialogue and communication” with other countries about A.I. safety. China is developing its own global initiative for A.I. governance, he said, adding that the technology is “uncertain, unexplainable and lacks transparency.”
In a speech on Friday, Mr. Sunak addressed criticism he had received from China hawks within his Conservative Party over the attendance of a delegation from Beijing. “Yes — we’ve invited China,” he said. “I know there are some who will say they should have been excluded. But there can be no serious strategy for A.I. without at least trying to engage all of the world’s leading A.I. powers. That might not have been the easy thing to do, but it was the right thing to do.”
With development of leading A.I. systems concentrated in the U.S. and a small number of other countries, some attendees said regulations must account for the technology’s impact globally. Rajeev Chandrasekhar, a minister of technology representing India at the summit, said policies must be set by a “coalition of nations rather than just one country to two countries.”
“By allowing innovation to get ahead of regulation, we open ourselves to the toxicity and misinformation and weaponization that we see on the internet today, represented by social media,” he said. “We certainly can agree today that is not what we should chart for the coming years in terms of A.I.”
Executives from leading technology and A.I. companies, including Anthropic, Google DeepMind, IBM, Meta, Microsoft, Nvidia, OpenAI and Tencent, were attending the conference. Also sending representatives were a number of civil society groups, among them Britain’s Ada Lovelace Institute and the Algorithmic Justice League, a nonprofit in Massachusetts.
In a surprise move, Mr. Sunak announced on Monday that he would take part in a live interview with Mr. Musk, the billionaire tech mogul, on his social media platform X after the summit ends on Thursday.
Some analysts argue that the conference will be heavier on symbolism than substance, with a number of key political leaders absent, including President Biden, President Emmanuel Macron of France and Chancellor Olaf Scholz of Germany.
And many governments are moving forward with their own laws and regulations. In the United States, Mr. Biden announced an executive order this week requiring A.I. companies to assess national security risks before releasing their technology to the public. The European Union’s “A.I. Act,” which could be finalized within weeks, represents a far-reaching attempt to govern the use of the technology and protect citizens from harm. China is also cracking down on how A.I. is used, including censoring chatbots.
Britain, home to many universities where artificial intelligence research is being conducted, has taken a more hands-off approach. The government believes that existing laws and regulations are sufficient for now, though it has announced a new A.I. Safety Institute that will evaluate and test new models.
Mr. Hogarth, whose team has negotiated early access to the models of several large A.I. companies to research their safety, said he believed that Britain could play an important role in figuring out how governments could “capture the benefits of these technologies as well as putting guardrails around them.”
In his speech last week, Mr. Sunak affirmed that Britain’s approach to the potential risks of the technology is “not to rush to regulate.”
“How can we write laws that make sense for something we don’t yet fully understand?” he said.