OpenAI arrives at the crossroads of progress and safety

OpenAI has recently made waves with the launch of its cutting-edge speech technology, underscoring the company’s shift in focus. While this marks an impressive leap in artificial intelligence (AI) capabilities, it also raises critical concerns that the race to release new products may be sidelining safety measures.

Originally established as a nonprofit research lab, OpenAI is also undergoing a transformation, increasingly developing products to appeal to investors ahead of a major capital raising.

Indeed, this week, OpenAI announced it had completed its long-anticipated funding round, raising US$6.6 billion in the largest venture capital deal of all time (exceeding the US$6 billion raised earlier this year by Elon Musk’s xAI), which values the company at US$157 billion. Joshua Kushner’s Thrive Capital led the round and was joined by Microsoft (NASDAQ:MSFT), Nvidia (NASDAQ:NVDA), SoftBank (TYO:9984), Khosla Ventures, Altimeter Capital, Fidelity, Tiger Global and MGX. Apple (NASDAQ:AAPL), which had been in talks to invest, did not participate.

This pivot marks a significant change from its earlier days when research and safety were at the forefront. Now, the company’s growing investor base is steering it towards a more commercially driven approach (surprise, surprise), aiming to keep up with the competitive tech landscape.

What happened: a showcase of new capabilities

During a recent developer event in San Francisco, OpenAI unveiled a suite of new functionalities, including a provocative real-time speech feature that is now accessible to developers. This tool lets developers integrate the same advanced voice capabilities that power ChatGPT, providing novel ways to interact with users.

To demonstrate its potential, OpenAI showcased the technology with an AI agent autonomously placing a call to a fictional store and ordering 400 chocolate-covered strawberries. The demonstration stirred excitement about AI’s practical potential while simultaneously sparking a wave of concerns about its obvious potential for misuse by bad actors. Already, I expect nefarious organisations and individuals will be conspiring to apply the technology to scam and harm others.

Concerns and reactions: safety in the spotlight

Rather than delving into the technical marvels of the new capability, observers, including journalists, pressed OpenAI executives about the safeguards in place to prevent malicious use. Surprisingly, OpenAI seemed unprepared for this line of questioning, offering only general assurances and motherhood statements. The company clarified that while it would enforce its terms of service to curb spam and fraud, it had no plans to watermark AI-generated voices or require developers to disclose AI usage beyond what was “obvious from the context.” Scary!

Internal friction: the speed vs. safety debate

The recent release underscores a growing internal debate within OpenAI. Reports have surfaced revealing that some OpenAI product teams are pushing for speed, even when concerns over safety testing have been raised. For example, The Wall Street Journal recently reported that OpenAI’s launch of its GPT-4o model earlier this year went ahead despite internal warnings about inadequate testing and the potential risks of its deployment.

Similarly, Fortune reported on another heated debate over the release readiness of OpenAI’s o1 reasoning model, previously known as “Strawberry.” These incidents reflect the broader tensions within OpenAI as it grapples with balancing ambitious product development against the need for rigorous safety protocols.

Sadly, history shows that under a capitalist framework, being first and making money trumps concerns over misuse and safety.

A changing landscape: structural shifts at OpenAI

This internal friction is a function of significant restructuring within the company. From its nonprofit research-focused origins, OpenAI has evolved into a more product-centric business, partly due to the involvement of investors like Microsoft. Following CEO Sam Altman’s brief ousting and rehiring, OpenAI has been under pressure from investors to further transition into a for-profit entity, a process that is still underway.

In securing this massive new funding round, insiders suggest the company has committed to becoming a full for-profit operation within the next two years to maintain investor confidence. This shift has already prompted major changes in the organisation’s workforce. Over the past year, the company has seen an influx of more than 1,000 new employees, while several original co-founders and key figures from the research and safety divisions, including Ilya Sutskever and Jan Leike, have departed. Just recently, Chief Research Officer Bob McGrew and VP of Research Barret Zoph announced their exits, alongside CTO Mira Murati.

OpenAI’s defence: safety still matters

Predictably, OpenAI rejects the notion that it is compromising safety. The company says it remains committed to releasing “the most capable and safest models in the industry.” Regarding the controversial GPT-4o release, OpenAI emphasised that it underwent a “deliberate and empirical safety process” before launch. The company pointed out that since its release, millions of people and developers have safely utilised the model, reinforcing OpenAI’s confidence in its risk assessment procedures.

Moreover, and also predictably, the retained members of OpenAI’s safety and security oversight committee expressed confidence in the company’s ability to “safely deliver AI that can solve harder problems”.

As always, time will tell.

Roger Montgomery is the Founder and Chairman of Montgomery Investment Management. Roger has over three decades of experience in funds management and related activities, including equities analysis, equity and derivatives strategy, trading and stockbroking. Prior to establishing Montgomery, Roger held positions at Ord Minnett Jardine Fleming, BT (Australia) Limited and Merrill Lynch.

This post was contributed by a representative of Montgomery Investment Management Pty Limited (AFSL No. 354564). The principal purpose of this post is to provide factual information and not provide financial product advice. Additionally, the information provided is not intended to provide any recommendation or opinion about any financial product. Any commentary and statements of opinion however may contain general advice only that is prepared without taking into account your personal objectives, financial circumstances or needs. Because of this, before acting on any of the information provided, you should always consider its appropriateness in light of your personal objectives, financial circumstances and needs and should consider seeking independent advice from a financial advisor if necessary before making any decisions. This post specifically excludes personal advice.
