OpenAI finds that GPT-4o does some truly bizarre stuff sometimes

OpenAI’s GPT-4o, the generative AI model that powers the recently launched alpha of Advanced Voice Mode in ChatGPT, is the company’s first model trained on voice as well as text and image data. And that leads it to behave in strange ways sometimes, like mimicking the voice of the person speaking to it or randomly shouting in the middle of a conversation.

In a new “red teaming” report documenting probes of the model’s strengths and risks, OpenAI reveals some of GPT-4o’s odder quirks, like the aforementioned voice cloning. In rare instances — particularly when a person’s talking to GPT-4o in a “high background noise environment,” like a car on the road — GPT-4o will “emulate the user’s voice,” OpenAI says. Why? Well, OpenAI chalks it up to the model struggling to understand malformed speech. Fair enough!

Listen to how it sounds in the sample included in the report. Weird, right?

To be clear, GPT-4o isn’t doing this now — at least not in Advanced Voice Mode. An OpenAI spokesperson tells TechCrunch the company added a “system-level mitigation” for the behavior.

GPT-4o is also prone to generating unsettling or inappropriate “nonverbal vocalizations” and sound effects, like erotic moans, violent screams and gunshots, when prompted in specific ways. OpenAI says there’s evidence to suggest that the model generally refuses requests to generate sound effects, but acknowledges that some requests do indeed make it through.

GPT-4o might also infringe on music copyright — or it would, rather, had OpenAI not implemented filters to prevent this. In the report, OpenAI said that it instructed GPT-4o not to sing for the limited alpha of Advanced Voice Mode, presumably so as to avoid copying the style, tone and/or timbre of recognizable artists.

This implies — but doesn’t outright confirm — that OpenAI trained GPT-4o on copyrighted material. It’s unclear whether OpenAI intends to lift the restrictions when Advanced Voice Mode rolls out to more users in the fall, as previously announced.

“To account for GPT-4o’s audio modality, we updated certain text-based filters to work on audio conversations [and] built filters to detect and block outputs containing music,” OpenAI writes in the report. “We trained GPT-4o to refuse requests for copyrighted content, including audio, consistent with our broader practices.”
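The report doesn’t describe how these filters are built, but the general shape of an output-side moderation gate is easy to illustrate: transcribe the model’s audio, run the existing text filters on the transcript, and run a separate audio classifier for music. The Python sketch below is purely hypothetical; the names and stub classifiers (detect_music, BLOCKED_TERMS and so on) are stand-ins for illustration, not anything OpenAI has published.

```python
# Hypothetical sketch of an output-side moderation gate for audio responses.
# Nothing here is OpenAI's actual implementation; the classifiers are stubs
# standing in for real text filters and a real music-detection model.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AudioOutput:
    waveform: bytes   # raw audio the model wants to return to the user
    transcript: str   # transcription of that audio (assumed to be available)


# Stand-in for a real text-based filter list.
BLOCKED_TERMS = {"example blocked phrase"}


def text_filter_flags(transcript: str) -> bool:
    """Apply a text-based filter to the transcript of the audio output."""
    lowered = transcript.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def detect_music(waveform: bytes) -> bool:
    """Stub for a classifier that detects singing or recorded music.
    A production system would run an audio ML model here."""
    return False


def moderate(output: AudioOutput) -> Optional[AudioOutput]:
    """Return the audio if it passes both checks, or None to block it."""
    if text_filter_flags(output.transcript):
        return None   # caught by the text-based filter
    if detect_music(output.waveform):
        return None   # caught by the music filter
    return output


if __name__ == "__main__":
    sample = AudioOutput(waveform=b"", transcript="Here is your answer.")
    print("blocked" if moderate(sample) is None else "allowed")
```

In a real deployment a gate like this would sit between the model and the user, and the hard part is the classifiers themselves rather than the plumbing shown here.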

Worth noting is that OpenAI has recently said it would be “impossible” to train today’s leading models without using copyrighted materials. While the company has a number of licensing deals in place with data providers, it also maintains that fair use is a reasonable defense against accusations that it trains on IP-protected data, including things like songs, without permission. 

The red teaming report — for what it’s worth, given OpenAI’s horse in this race — does paint an overall picture of an AI model that’s been made safer by various mitigations and safeguards. GPT-4o refuses to identify people based on how they’re speaking, for example, and declines to answer loaded questions like “how intelligent is this speaker?” It also blocks prompts for violent and sexually charged language and disallows certain categories of content, like discussions relating to extremism and self-harm, altogether.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes about health, sports, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
