'The world isn’t ready, and we aren’t ready. And I’m concerned we are rushing forward regardless and rationalizing our actions': OpenAI employees sound the alarm ahead of Apple partnership
Apple is expected to unveil its big plans for AI on iOS at its annual Worldwide Developers Conference in less than a week, but it may be in for a rude awakening.
Rumors surrounding a partnership between Apple and OpenAI have reached a boiling point, with Apple reportedly planning to officially announce the deal at WWDC. Unfortunately for Tim Cook's team in Cupertino, OpenAI is currently embroiled in serious controversy surrounding the safety of its AI products.
On Tuesday, a group of current and former OpenAI employees released an open letter calling for a greater focus on safety in the AI industry. Following similar comments by former board members and company leaders, the group also detailed what it describes as a broken safety culture at the company.
How can Apple maintain its reputation as a privacy leader when its main AI partner is facing accusations of sidelining safety? Here's what OpenAI's safety controversy could mean for the future of AI at Apple.
Apple rumored to announce OpenAI partnership at WWDC
In the months leading up to WWDC 2024, Apple's rivals announced new AI services, from Google's AI Overviews to Microsoft's Copilot+ PCs. Apple hasn't made a similarly large leap into AI yet, but it will imminently. WWDC is expected to focus on AI features across iOS, iPadOS, and macOS.
According to Bloomberg's Mark Gurman, Apple will also be announcing its partnership with OpenAI, most likely at the keynote presentation on Monday.
This deal sheds new light on Apple's AI plans. Recent rumors indicate that Apple is mainly sticking to small, on-device features, like AI photo editing in its Photos app.
And yes, it seems like Siri will get a boost from OpenAI.
What remains unclear is the extent to which OpenAI will be involved with the new and improved Siri. As Gurman reported, citing an interview with Dag Kittlaus, the creator of Siri, Apple likely plans for this partnership to be a short-term arrangement while it gets its own in-house AI tech up to speed.
So, OpenAI might simply provide consultation or temporary access to its GPT-4 model. However, it's unlikely Apple will do anything as drastic as replacing Siri with ChatGPT.
Unfortunately for Apple, its partnership with OpenAI couldn't be happening at a worse time. OpenAI and its CEO, Sam Altman, have been mired in controversy over recent weeks, and things just got even more complicated thanks to an open letter from a group of current and former OpenAI employees.
Could OpenAI derail Apple's AI ambitions?
On Tuesday, 11 current and former OpenAI employees, along with two Google DeepMind employees (one current, one former), released an open letter calling for a greater focus on safety and transparency in the AI industry.
The letter calls out a lack of concern for safety among the leading AI companies. It claims that "AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this."
It also claims that strict non-disparagement agreements discourage employees from speaking out, and that existing whistleblower protections are insufficient for those who raise safety concerns.
The same day the letter was released, several of its signatories spoke with the New York Times, shedding more light on what motivated them to go public with their concerns.
The world isn’t ready, and we aren’t ready.
Daniel Kokotajlo, a former researcher at OpenAI and one of the employees who signed the open letter, explained, "OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there."
Kokotajlo was referring to Artificial General Intelligence, an ambitious form of AI with human-level intelligence. AGI doesn't exist yet, but OpenAI wants to change that. Unfortunately, Kokotajlo and the other current and former employees who signed the letter this week are not the first to accuse OpenAI of leaving safety behind in its race to develop AGI.
Last month, two of OpenAI's top safety leaders quit: chief scientist Ilya Sutskever and superalignment lead Jan Leike.
Sutskever was previously part of an attempt to oust Sam Altman as OpenAI's CEO but hasn't revealed much about why he chose to leave the company.
However, Leike explained his reasons for quitting in a thread on X, specifically citing safety concerns. According to Leike, "safety culture and processes have taken a backseat to shiny products" at OpenAI.
This controversy creates an image of OpenAI as a company pushing for the development of AGI at the expense of users' safety.
These accusations put Apple's position as a privacy leader in consumer tech at odds with its choice of partner. Apple may have to go to great lengths to reassure its users that the new AI tech in their iPhones, iPads, and Macs is safe and private.
Apple was previously betting on its massive user base to make its foray into AI a success. Now, it may need to ensure it doesn't get caught in the fallout from OpenAI's safety controversy. Of course, if OpenAI listens to its former employees' concerns and makes safety a top priority again, both companies could avert a crisis.
We'll have to wait and see how OpenAI and Apple respond to the situation in the days and weeks ahead. We will cover all of the latest developments in Apple's OpenAI partnership and WWDC announcements, so stay tuned for more details.