Facebook Says It Will Ban ‘Deepfakes’
WASHINGTON — Facebook says it will ban videos that are heavily manipulated by artificial intelligence, the latest in a string of changes by the company to combat the flow of false information on its site.
A company executive said in a blog post late Monday that the social network would remove videos, often called deepfakes, that artificial intelligence has altered in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” The videos will also be banned in ads.
The policy will have a limited effect on slowing the spread of false videos, since the vast majority are edited in more traditional ways: cutting out context or changing the order of words. The policy will not extend to those videos, or to parody or satire, said the executive, Monika Bickert.
Ms. Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content that is found to be factually incorrect appears less prominently on the site’s news feed and is labeled false.
But the announcement underscores how Facebook, by far the world’s largest social network, is trying to thwart one of the latest tricks used by purveyors of disinformation ahead of this year’s presidential election. False information spread furiously on the platform during the 2016 campaign, leading to widespread criticism of the company.
By banning deepfakes before the technology becomes widespread, Facebook is trying to calm lawmakers, academics and political campaigns who remain frustrated by how the company handles political posts and videos about politics and politicians.
But some Democratic politicians said the new policy did not go nearly far enough. Last year, Facebook refused to take down a video that was edited to make it appear that Speaker Nancy Pelosi was slurring her words. At the time, the company defended its decision despite furious criticism, saying that it had subjected the video to its fact-checking process and had reduced its reach on the social network.
The new policy, though, does not apply to the video of Ms. Pelosi. Disinformation researchers have referred to similar videos as “cheapfakes” or “shallowfakes,” or deceptive content edited with simple video-editing software, in contrast to the more sophisticated deepfake videos generated by artificial intelligence.
Ms. Pelosi’s deputy chief of staff, Drew Hammill, said in a statement that Facebook “wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”
Facebook would also keep up a video, widely circulated last week, in which a long response that former Vice President Joseph R. Biden Jr. gave to a voter in New Hampshire was heavily edited to wrongly suggest that he made racist remarks.
Bill Russo, deputy communications director of Mr. Biden’s presidential campaign, said Facebook’s new policy was not meant “to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”
“Banning deepfakes should be an incredibly low floor in combating disinformation,” Mr. Russo said.
The company’s new policy was first reported by The Washington Post.
Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online.
Deepfakes have become much more prevalent in recent months, especially on social media. And they have already begun challenging the public’s assumptions about what is real and what is not.
Last year, for instance, a Facebook video released by the government of Gabon, a country in central Africa, was meant to show proof of life for its president, who was out of the country for medical care. But the president’s critics claimed it was fake.
In December 2017, the technology site Motherboard reported that people were using A.I. technology to graft the heads of celebrities onto nude bodies in pornographic videos. Websites like Pornhub, Twitter and Reddit suppressed the videos, but according to the research firm Deeptrace Labs, these videos still made up 96 percent of deepfakes found in the last year.
Tech companies are researching new techniques to detect deepfake videos and stop their spread on social media, even as the technology to create them quickly evolves. Last year, Facebook participated in a “Deepfake Detection Challenge” and, along with other tech firms like Google and Microsoft, offered a bounty for outside researchers who develop the best tools and techniques to identify A.I.-generated deepfake videos.
Because Facebook is the No. 1 platform for sharing false political stories, according to disinformation researchers, it has an added urgency to spot and halt novel forms of digital manipulation. Renée DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, pointed out that a challenge of the policy was that the deepfake content “is likely to have already gone viral prior to any takedown or fact check.”
On Wednesday, Ms. Bickert, Facebook’s vice president of global policy management, is expected to join other experts to testify on “manipulation and deception in the digital age” before the House Energy and Commerce Committee.
Ms. DiResta urged lawmakers to “delve into the specifics around how quickly the company envisions it could detect or respond to a viral deepfake, or to the ‘shallowfakes’ material which it won’t take down but has committed to fact-checking.”
Subbarao Kambhampati, a professor of computer science at Arizona State University, described Facebook’s effort to detect deepfakes as “a moving target.” He said that Facebook’s automated systems for detecting such videos would have limited reach, and that there would be “significant incentive” for people to develop fakes that fool Facebook’s systems.
There are many ways to manipulate videos with the help of artificial intelligence, added Matthias Niessner, a professor of computer science at the Technical University of Munich, who works with Google on its deepfake research. There are deepfake videos in which faces are swapped, for instance, or in which a person’s expression and lip movement are altered, he said.
“The question is where you draw the line,” Mr. Niessner said. “Eventually, it raises the question of intent and semantics.”
David McCabe reported from Washington, and Davey Alba from New York.