OpenAI sends internal memo releasing former employees from NDAs


OpenAI CEO Sam Altman speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.

Jason Redmond | AFP | Getty Images

OpenAI on Thursday backtracked on a controversial decision that, in effect, made former employees choose between signing a non-disparagement agreement that would never expire or keeping their vested equity in the company.

The internal memo, which was viewed by CNBC, was sent to former employees and shared with current ones.

The memo, addressed to each former employee, said that at the time of the person’s departure from OpenAI, “you may have been informed that you were required to execute a general release agreement that included a non-disparagement provision in order to retain the Vested Units [of equity].”

“Regardless of whether you executed the Agreement, we write to notify you that OpenAI has not canceled, and will not cancel, any Vested Units,” the memo stated.

The memo said OpenAI also will not enforce any other non-disparagement or non-solicitation contract provisions the employee may have signed.

“As we shared with employees, we are making important updates to our departure process,” an OpenAI spokesperson told CNBC in a statement.

“We have not and never will take away vested equity, even when people didn’t sign the departure documents. We’ll remove nondisparagement clauses from our standard departure paperwork, and we’ll release former employees from existing nondisparagement obligations unless the nondisparagement provision was mutual,” the statement said, adding that former employees would be notified of this as well.

“We’re incredibly sorry that we’re only changing this language now; it doesn’t reflect our values or the company we want to be,” the OpenAI spokesperson added.

Bloomberg first reported on the release from the non-disparagement provision. Vox first reported on the existence of the NDA provision.

The news comes amid mounting controversy for OpenAI over the past week or so.

On Monday, one week after OpenAI debuted a range of audio voices for ChatGPT, the company announced it would pull one of the viral chatbot’s voices, named “Sky.”

“Sky” created controversy for resembling the voice of actress Scarlett Johansson in “Her,” a movie about artificial intelligence. The Hollywood star has alleged that OpenAI ripped off her voice even though she declined to let the company use it.

“We’ve heard questions about how we chose the voices in ChatGPT, especially Sky,” the Microsoft-backed company posted on X. “We are working to pause the use of Sky while we address them.”

Also last week, OpenAI disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC on Friday.

The person, who spoke to CNBC on condition of anonymity, said some of the team members are being reassigned to several other teams within the company.

The news came days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

OpenAI’s Superalignment team, which was formed last year, focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

The company did not provide a comment on the record and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

On Saturday, OpenAI co-founder Greg Brockman posted a statement attributed to both himself and Altman on X, asserting that the company has “raised awareness of the risks and opportunities of AGI [artificial general intelligence] so that the world can better prepare for it.”
