
You can find the current article at its original source at https://www.nytimes.com/2020/01/07/technology/facebook-says-it-will-ban-deepfakes.html


Facebook Says It Will Ban ‘Deepfakes’
WASHINGTON — Facebook says it will ban videos that are heavily manipulated by artificial intelligence, the latest in a string of changes by the company to combat the flow of false information on its site.
A company executive said in a blog post late Monday that the social network would remove videos, often called deepfakes, that artificial intelligence has altered in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” The videos will also be banned in ads.
The policy will have a limited effect on slowing the spread of false videos, since the vast majority are edited in more traditional ways: cutting out context or changing the order of words. The policy will not extend to those videos, or to parody or satire, said the executive, Monika Bickert.
Ms. Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content that is found to be factually incorrect appears less prominently on the site’s news feed and is labeled false.
But the announcement underscores how Facebook, by far the world’s largest social network, is trying to thwart one of the latest tricks used by purveyors of disinformation ahead of this year’s presidential election. False information spread furiously on the platform during the 2016 campaign, leading to widespread criticism of the company.
By banning deepfakes before the technology becomes widespread, Facebook is trying to calm lawmakers, academics and political campaigns who remain frustrated by how the company handles political posts and videos about politics and politicians.
But some Democratic politicians said the new policy did not go nearly far enough. Last year, Facebook refused to take down a video that was edited to make it appear that Speaker Nancy Pelosi was slurring her words. At the time, the company defended its decision despite furious criticism, saying that it had subjected the video to its fact-checking process and had reduced its reach on the social network.
The new policy, though, does not apply to the video of Ms. Pelosi. Disinformation researchers have referred to such videos as “cheapfakes” or “shallowfakes”: deceptive content edited with simple video-editing software, in contrast to the more sophisticated deepfake videos generated by artificial intelligence.
Ms. Pelosi’s deputy chief of staff, Drew Hammill, said in a statement that Facebook “wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”
Facebook would also keep up a video, widely circulated last week, in which a long response that former Vice President Joseph R. Biden Jr. gave to a voter in New Hampshire was heavily edited to wrongly suggest that he made racist remarks.
Bill Russo, deputy communications director of Mr. Biden’s presidential campaign, said Facebook’s new policy was not meant “to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”
“Banning deepfakes should be an incredibly low floor in combating disinformation,” Mr. Russo said.
The company’s new policy was first reported by The Washington Post.
Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online.
Deepfakes have become much more prevalent in recent months, especially on social media. And they have already begun challenging the public’s assumptions about what is real and what is not.
Last year, for instance, a Facebook video released by the government of Gabon, a country in central Africa, was meant to show proof of life for its president, who was out of the country for medical care. But the president’s critics claimed it was fake.
In December 2017, the technology site Motherboard reported that people were using A.I. technology to graft the heads of celebrities onto nude bodies in pornographic videos. Websites like Pornhub, Twitter and Reddit suppressed the videos, but according to the research firm Deeptrace Labs, these videos still made up 96 percent of deepfakes found in the last year.
Tech companies are researching new techniques to detect deepfake videos and stop their spread on social media, even as the technology to create them quickly evolves. Last year, Facebook participated in a “Deepfake Detection Challenge” and, along with other tech firms like Google and Microsoft, offered a bounty for outside researchers who develop the best tools and techniques to identify A.I.-generated deepfake videos.
In late December, Facebook announced it had removed hundreds of accounts, including pages, groups and Instagram feeds, meant to fool users in the United States and Vietnam with fake profile photos generated with the help of artificial intelligence.
Because Facebook is the No. 1 platform for sharing false political stories, according to disinformation researchers, it has an added urgency to spot and halt novel forms of digital manipulation. Renée DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, pointed out that a challenge of the policy was that the deepfake content “is likely to have already gone viral prior to any takedown or fact check.”
On Wednesday, Ms. Bickert, Facebook’s vice president of global policy management, is expected to join other experts to testify on “manipulation and deception in the digital age” before the House Energy and Commerce Committee.
Ms. DiResta urged lawmakers to “delve into the specifics around how quickly the company envisions it could detect or respond to a viral deepfake, or to the ‘shallowfakes’ material which it won’t take down but has committed to fact-checking.”
Subbarao Kambhampati, a professor of computer science at Arizona State University, described Facebook’s effort to detect deepfakes as “a moving target.” He said that Facebook’s automated systems for detecting such videos would have limited reach, and that there would be “significant incentive” for people to develop fakes that fool Facebook’s systems.
There are many ways to manipulate videos with the help of artificial intelligence, added Matthias Niessner, a professor of computer science at the Technical University of Munich, who works with Google on its deepfake research. There are deepfake videos in which faces are swapped, for instance, or in which a person’s expression and lip movement are altered, he said.
“The question is where you draw the line,” Mr. Niessner said. “Eventually, it raises the question of intent and semantics.”
David McCabe reported from Washington, and Davey Alba from New York.