ChatGPT now understands real-time video, seven months after OpenAI first demoed it



OpenAI has finally released the real-time video capabilities for ChatGPT that it demoed nearly seven months ago.

On Thursday during a livestream, the company said that Advanced Voice Mode, its human-like conversational feature for ChatGPT, is getting vision. Using the ChatGPT app, users subscribed to ChatGPT Plus, Team, or Pro can point their phones at objects and have ChatGPT respond in near-real-time.

Advanced Voice Mode with vision can also understand what’s on a device’s screen, via screen sharing. It can explain various settings menus, for example, or give suggestions on a math problem.

To access Advanced Voice Mode with vision, tap the voice icon next to the ChatGPT chat bar, then tap the video icon at the bottom left, which will start video. To screen-share, tap the three-dot menu and select “Share Screen.”

The rollout of Advanced Voice Mode with vision will start today, OpenAI says, and wrap up in the next week. But not all users will get access. OpenAI says that ChatGPT Enterprise and Edu subscribers won’t get the feature until January, and that it has no timeline for ChatGPT users in the EU, Switzerland, Iceland, Norway, and Liechtenstein.

In a recent demo on CBS’s 60 Minutes, OpenAI president Greg Brockman had Advanced Voice Mode with vision quiz Anderson Cooper on his anatomy skills. As Cooper drew body parts on a blackboard, ChatGPT could “understand” what he was drawing.


“The location is spot on,” the assistant said. “The brain is right there in the head. As for the shape, it’s a good start. The brain is more of an oval.”

In that same demo, however, Advanced Voice Mode with vision made a mistake on a geometry problem, suggesting that it’s prone to hallucinating.

Advanced Voice Mode with vision has been delayed multiple times — reportedly in part because OpenAI announced the feature far before it was production-ready. In April, OpenAI promised that Advanced Voice Mode would roll out to users “within a few weeks.” Months later, the company said it needed more time.

When Advanced Voice Mode finally arrived in early fall for some ChatGPT users, it lacked the visual analysis component. In the lead-up to today’s launch, OpenAI has focused most of its attention on bringing the voice-only Advanced Voice Mode experience to additional platforms and users in the EU.

In addition to Advanced Voice Mode with vision, OpenAI launched a festive “Santa Mode,” which adds Santa’s voice as a preset in ChatGPT’s Advanced Voice Mode. Users can find it by tapping or clicking the snowflake icon next to the prompt bar in ChatGPT.




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She covers health, sports, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
