By Nico Grant and Karen Weise
Nico Grant reported this story from San Francisco, and Karen Weise reported from Seattle.
In March, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.
Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.
The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.
The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry’s next big thing — generative A.I., the powerful new technology that fuels those chatbots.