AI workers demand stronger whistleblower protections in open letter (2024)

A group of current and former employees from leading AI companies like OpenAI, Google DeepMind and Anthropic has signed an open letter asking for greater transparency and protection from retaliation for those who speak out about the potential risks of AI. “So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter, which was published on Tuesday, says. “Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”

The letter comes just a couple of weeks after a Vox investigation revealed OpenAI had attempted to muzzle recently departing employees by forcing them to choose between signing an aggressive non-disparagement agreement or risk losing their vested equity in the company. After the report, OpenAI CEO Sam Altman said that he had been “genuinely embarrassed” by the provision and claimed it had been removed from recent exit documentation, though it was unclear whether it remained in force for some employees. After this story was published, an OpenAI spokesperson told Engadget that the company had removed the non-disparagement clause from its standard departure paperwork and released all former employees from their non-disparagement agreements.

The 13 signatories include former OpenAI employees Jacob Hilton, William Saunders and Daniel Kokotajlo. Kokotajlo said that he resigned from the company after losing confidence that it would responsibly build artificial general intelligence, a term for AI systems that are as smart as or smarter than humans. The letter — which was endorsed by prominent AI experts Geoffrey Hinton, Yoshua Bengio and Stuart Russell — expresses grave concerns over the lack of effective government oversight for AI and the financial incentives driving tech giants to invest in the technology. The authors warn that the unchecked pursuit of powerful AI systems could lead to the spread of misinformation, exacerbation of inequality and even the loss of human control over autonomous systems, potentially resulting in human extinction.

“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all areas,” wrote Kokotajlo on X. “Meanwhile, there is little to no oversight over this technology. Instead, we rely on the companies building them to self-govern, even as profit motives and excitement about the technology push them to ‘move fast and break things.’ Silencing researchers and making them afraid of retaliation is dangerous when we are currently some of the only people in a position to warn the public.”

In a statement shared with Engadget, an OpenAI spokesperson said: “We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology and we'll continue to engage with governments, civil society and other communities around the world.” They added: “This is also why we have avenues for employees to express their concerns including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company.”

Google and Anthropic did not respond to requests for comment from Engadget.

The signatories are calling on AI companies to commit to four key principles:

  • Refraining from retaliating against employees who voice safety concerns

  • Supporting an anonymous system for whistleblowers to alert the public and regulators about risks

  • Allowing a culture of open criticism

  • And avoiding non-disparagement or non-disclosure agreements that restrict employees from speaking out

The letter comes amid growing scrutiny of OpenAI's practices, including the disbandment of its "superalignment" safety team and the departure of key figures like co-founder Ilya Sutskever and Jan Leike, who criticized the company's prioritization of "shiny products" over safety.

Update, June 05 2024, 11:51AM ET: This story has been updated to include statements from OpenAI.


