Alphabet Inc.'s Google this year tightened control over its scientists' papers by launching a review of "sensitive topics," and in at least three cases asked authors to refrain from casting its technology in a negative light, according to internal communications and interviews with researchers involved in the work.
Google's new review process requires researchers to consult with legal, policy and public relations teams before engaging with topics such as facial and sentiment analysis and categorization of race, gender or political affiliation, according to internal webpages outlining the policy.
"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly harmless projects raise ethical, reputational, regulatory or legal issues," one of the pages told research staff. Reuters was unable to determine the date of the post, although three current employees said the policy was launched in June.
Google declined to comment on this story.
The "sensitive topics" process adds additional scrutiny to Google's standard review of documents for pitfalls such as disclosing trade secrets, eight current and former employees said.
For some projects, Google officials intervened at later stages. A senior Google manager who reviewed a study on content recommendation technology shortly before publication this summer told the authors they should "be careful to strike a positive note," according to internal correspondence seen by Reuters.
The manager added, "This doesn't mean we should hide from the real challenges" of the software.
Subsequent correspondence from a researcher to reviewers shows the authors "updated to remove all references to Google products." A draft seen by Reuters had mentioned Google-owned YouTube.
Four staff researchers, including senior scientist Margaret Mitchell, said they believe Google is starting to interfere with crucial studies of potential technology harms.
"If, given our expertise, we investigate the right thing, and we're not allowed to publish that on grounds inconsistent with high-quality peer review, then we run into a serious problem of censorship," Mitchell said.
Google states on its public-facing website that its scientists have "substantial" freedom.
Tensions between Google and some of its staff erupted this month after the abrupt departure of scientist Timnit Gebru, who, along with Mitchell, led a 12-person team focused on ethics in artificial intelligence software (AI).
Gebru says Google fired her after she questioned an order not to publish a study claiming that artificial intelligence that mimics speech could harm marginalized populations. Google said it accepted and accelerated her resignation. It could not be determined whether Gebru's paper underwent a review on "sensitive topics".
Google Senior Vice President Jeff Dean said in a statement this month that Gebru's paper drew attention to potential harms without discussing ongoing efforts to address them.
Dean added that Google supports AI ethics grants and "is actively working to improve our paper assessment processes because we know that too many checks and balances can become cumbersome."
The explosion in research and development of AI in the tech industry has prompted authorities in the United States and elsewhere to propose rules for its use. Some have cited scientific studies showing that facial analysis software and other AI can perpetuate bias or compromise privacy.
In recent years, Google has integrated AI into its services, using the technology to interpret complex queries, decide recommendations on YouTube, and auto-complete sentences in Gmail. The researchers published more than 200 papers on responsible development of AI in the past year, among more than 1,000 projects in total, Dean said.
According to an internal webpage, studying Google services for bias is among the "sensitive topics" under the company's new policy. Dozens of other "sensitive topics" listed included the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms, and systems that recommend or personalize web content.
The Google paper for which authors were told to strike a positive note discusses recommendation AI, which services like YouTube use to personalize users' content feeds. A draft reviewed by Reuters raised "concerns" that this technology could promote "disinformation, discriminatory or otherwise unfair results" and "insufficient diversity of content," and could also lead to "political polarization."
The published version instead says the systems can promote "accurate information, fairness and diversity of content." That version, titled "What are you optimizing for? Aligning recommendation systems to human values," dropped the credit to Google researchers. Reuters could not determine why.
A paper this month on AI for understanding foreign languages softened a reference to mistakes made by the Google Translate product at the request of company reviewers, a source said. The published version says the authors used Google Translate, and a separate sentence says part of the research method was to "review and fix inaccurate translations."
For a paper published last week, a Google employee described the review as a "long-term" process involving more than 100 email exchanges between researchers and reviewers, the internal correspondence shows.
The researchers found that AI can cough up personal data and copyrighted material – including a page from a "Harry Potter" novel – pulled from the internet used to develop the system.
One draft described how such disclosures could infringe copyrights or violate European privacy law, a person familiar with the matter said. After company reviews, the authors removed the references to legal risks, and Google published the paper.
(Reporting by Paresh Dave and Jeffrey Dastin; edited by Jonathan Weber and Edward Tobin)