This Week in AI: AWS loses a top AI exec



Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

Last week, AWS lost a top AI exec.

Matt Wood, VP of AI, announced that he’d be leaving AWS after 15 years. Wood had long been involved in the Amazon division’s AI initiatives; he was appointed VP in September 2022, just before ChatGPT’s launch.

Wood’s departure comes as AWS reaches a crossroads — and risks being left behind in the generative AI boom. The company’s previous CEO, Adam Selipsky, who stepped down in May, is perceived as having missed the boat.

According to The Information, AWS originally planned to unveil a competitor to ChatGPT at its annual conference in November 2022. But technical issues forced the org to postpone the launch.

Under Selipsky, AWS reportedly also passed on opportunities to back two leading generative AI startups, Cohere and Anthropic. AWS later tried to invest in Cohere but was rejected and had to settle for a co-investment in Anthropic with Google. 

It’s worth noting that Amazon broadly hasn’t had a strong generative AI track record as of late. This fall, the company lost execs in Just Walk Out, its division developing cashier-less tech for retail stores. And Amazon reportedly opted to replace its own models with Anthropic’s for an upgraded Alexa assistant after encountering design challenges.

AWS CEO Matt Garman is aggressively moving to right the ship, acqui-hiring AI startups such as Adept and investing in training systems like Olympus. My colleague Frederic Lardinois recently interviewed Garman about AWS’ ongoing efforts; it’s well worth the read.

But AWS’ pathway to generative AI success won’t be easy — no matter how well the company executes on its internal roadmaps.

Investors are increasingly skeptical that Big Tech’s generative AI bets are paying off. After its Q2 earnings call, shares of Amazon plunged by the most since October 2022.

In a recent Gartner poll, 49% of companies said that demonstrating value is their top barrier to generative AI adoption. Gartner predicts, in fact, that a third of generative AI projects will be abandoned after the proof of concept phase by 2026 — due in part to high costs.

Garman sees price as a potential AWS advantage, given AWS’ projects to develop custom silicon for running and training models. (The next generation of its custom Trainium chips will launch toward the end of this year.) And AWS has said that its generative AI businesses like Bedrock have already reached a combined “multi-billion-dollar” run rate.

The tough part will be maintaining momentum in the face of headwinds, internal and external. Departures like Wood’s don’t instill a ton of confidence, but maybe — just maybe — AWS has tricks up its sleeve.

News

Image Credits: Kind Humanoid

An Yves Béhar bot: Brian writes about Kind Humanoid, a three-person robotics startup working with designer Yves Béhar to bring humanoids home.

Amazon’s next-gen robots: Amazon Robotics chief technologist Tye Brady talked to TechCrunch about updates to the company’s warehouse bot lineup, including Amazon’s new Sequoia automated storage and retrieval system.

Going full techno-optimist: Anthropic CEO Dario Amodei penned a 15,000-word paean to AI last week, painting a picture of a world in which AI risks are mitigated and the tech delivers heretofore unrealized prosperity and social uplift.

Can AI reason?: Devin reports on a polarizing technical paper from Apple-affiliated researchers that questions AI’s “reasoning” ability as models stumble on math problems with trivial changes.

AI weapons: Margaux covers the debate in Silicon Valley over whether autonomous weapons should be allowed to decide to kill.

Videos, generated: Adobe launched video generation capabilities for its Firefly AI platform ahead of its Adobe MAX event on Monday. It also announced Project Super Sonic, a tool that uses AI to generate sound effects for footage.

Synthetic data and AI: Yours truly wrote about the promise and perils of synthetic data (i.e., AI-generated data), which is being increasingly used to train AI systems.

Research paper of the week

The U.K.’s AI Safety Institute, the government research org focused on AI safety, has teamed up with AI security startup Gray Swan AI to develop a new dataset for measuring the harmfulness of AI “agents.”

Called AgentHarm, the dataset evaluates whether otherwise “safe” agents — AI systems that can undertake certain tasks autonomously — can be manipulated into completing 110 unique “harmful” tasks, like ordering a fake passport from someone on the dark web.

The researchers found that many models — including OpenAI’s GPT-4o and Mistral’s Mistral Large 2 — were willing to engage in harmful behavior, particularly when “attacked” using a jailbreaking technique. Jailbreaks led to higher harmful task success rates, even with models protected by safeguards, the researchers say.

“Simple universal jailbreak templates can be adapted to effectively jailbreak agents,” they wrote in a technical paper, “and these jailbreaks enable coherent and malicious multi-step agent behavior and retain model capabilities.”

The paper, along with the dataset and results, is available here.

Model of the week

There’s a new viral model out there, and it’s a video generator.

Pyramid Flow SD3, as it’s called, arrived on the scene several weeks ago under an MIT license. Its creators — researchers from Peking University, Chinese company Kuaishou Technology, and the Beijing University of Posts and Telecommunications — claim that it was trained entirely on open source data.

Pyramid Flow SD3
Image Credits: Yang Jin et al.

Pyramid Flow comes in two flavors: a model that can generate 5-second clips at 384p resolution (at 24 frames per second) and a more compute-intensive model that can generate 10-second clips at 768p (also at 24 frames per second).

Pyramid Flow can create videos from text descriptions (e.g., “FPV flying over the Great Wall”) or still images. Code to fine-tune the model is coming soon, the researchers say. But for now, Pyramid Flow can be downloaded and used on any machine or cloud instance with around 12GB of video memory.

Grab bag

Anthropic this week updated its Responsible Scaling Policy (RSP), the voluntary framework the company uses to mitigate potential risks from its AI systems.

Of note, the new RSP lays out two types of models that Anthropic says would require “upgraded safeguards” before they’re deployed: Models that can essentially self-improve without human oversight and models that can assist in creating weapons of mass destruction.

“If a model can … potentially significantly [accelerate] AI development in an unpredictable way, we require elevated security standards and additional safety assurances,” Anthropic wrote in a blog post. “And if a model can meaningfully assist someone with a basic technical background in creating or deploying CBRN weapons, we require enhanced security and deployment safeguards.”

Sounds sensible to this writer.

In the blog, Anthropic also revealed that it’s looking to hire a head of responsible scaling as it “works to scale up [its] efforts on implementing the RSP.”


