
You can find the current article at its original source at https://www.rt.com/news/575106-employees-warned-google-bard-ai-implementation/


Staff warned Google about Bard AI dangers – media
The chatbot issues life-threatening advice, a survey of workers reveals
Thousands of Google employees tried to warn the company that its AI chatbot, Bard, is not just a “pathological liar,” but potentially a murderer, offering advice that could lead to death, Bloomberg reported on Wednesday. Eighteen current and former Google staff who spoke to the outlet added that the company did not listen.
One employee claimed they informed Google management that Bard’s tips on scuba diving would “likely result in serious injury or death,” while another said its description of how to land a plane would almost certainly result in a crash.
A third employee, in a note that was seen by 7,000 Google users, described Bard as “worse than useless,” and begged the company not to launch it. Others called the AI “cringeworthy.”
Ethics specialists were told to step down as the company rushed Bard to market, the sources told Bloomberg. Meredith Whittaker, a former manager at the company, said “AI ethics has taken a back seat” to profit and growth, and two other workers said ethics reviews are almost entirely optional.
Products that previously had to clear a 99% threshold for metrics such as ‘fairness’ now only had to measure up to “80, 85%, or something” in order to launch, AI governance head Jen Gennai reportedly told employees.
Earlier this month, two whistleblowers revealed that they had tried to block Bard’s release, warning the AI was prone to “inaccurate and dangerous statements,” only to have their concerns squashed as Gennai recast the ‘product launch’ as an “experiment.”
Introduced as a competitor to OpenAI’s blockbuster ChatGPT last month, Bard was supposed to be the leading edge of Google’s transformation of its search business. When ChatGPT launched, Google did not have its own generative AI integrated into its products in a way that customers could actually use. The company allegedly panicked, issuing a ‘code red’ and embracing risk starting in December, the employees told Bloomberg.
Google has denied sidelining ethics. “We are continuing to invest in the teams that work on applying our AI principles to our technology,” spokesperson Brian Gabriel told Bloomberg.
The company laid off three members of its ‘responsible AI’ team in January, announcing plans to roll out more than 20 AI-powered products this year. Meanwhile, those remaining in the AI ethics team are now “disempowered and demoralized,” according to Bloomberg.