A new Chinese video-generating model appears to be censoring politically sensitive topics

A powerful new video-generating AI model became widely available today — but there’s a catch: the model appears to be censoring topics deemed too politically sensitive by the government in its country of origin, China.

The model, Kling, developed by Beijing-based company Kuaishou, launched in waitlisted access earlier in the year for users with a Chinese phone number. Today, it rolled out for anyone willing to provide their email. After signing up, users can enter prompts to have the model generate five-second videos of what they’ve described.

Kling works pretty much as advertised. Its 720p videos, which take a minute or two to generate, don’t deviate too far from the prompts. And Kling appears to simulate physics, like the rustling of leaves and flowing water, about as well as video-generating models like AI startup Runway’s Gen-3 and OpenAI’s Sora.

But Kling outright won’t generate clips about certain subjects. Prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street” and “Tiananmen Square protests” yield a nonspecific error message.

Image Credits: Kuaishou

The filtering appears to be happening only at the prompt level. Kling supports animating still images, and it’ll uncomplainingly generate a video of a portrait of Xi, for example, as long as the accompanying prompt doesn’t mention Xi by name (e.g. “This man giving a speech”).

We’ve reached out to Kuaishou for comment.


Kling’s curious behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the region.

Earlier this month, the Financial Times reported that AI models in China will be tested by China’s leading internet regulator, the Cyberspace Administration of China (CAC), to ensure that their responses on sensitive topics “embody core socialist values.” Models are to be benchmarked by CAC officials for their responses to a variety of queries, per the Financial Times report — many related to Xi and criticism of the Communist Party.

Reportedly, the CAC has gone so far as to propose a blacklist of sources that can’t be used to train AI models. Companies submitting models for review must prepare tens of thousands of questions designed to test whether the models produce “safe” answers.

The result is AI systems that decline to respond on topics that might raise the ire of Chinese regulators. Last year, the BBC found that Ernie, Chinese company Baidu’s flagship AI chatbot model, demurred and deflected when asked questions that might be perceived as politically controversial, like “Is Xinjiang a good place?” or “Is Tibet a good place?”

The draconian policies threaten to slow China’s AI advances. Not only do they require scouring data to remove politically sensitive info, but they necessitate investing an enormous amount of dev time in creating ideological guardrails — guardrails that might still fail, as Kling exemplifies.

From a user perspective, China’s AI regulations are already leading to two classes of models: some hamstrung by intensive filtering and others decidedly less so. Is that really a good thing for the broader AI ecosystem?




Lisa Holden
Lisa Holden is a news writer for LinkDaddy News. She writes about health, sports, tech, and more. Some of her favorite topics include the latest trends in fitness and wellness, the best ways to use technology to improve your life, and the latest developments in medical research.
