A powerful new AI model for generating videos became widely available today, but there’s a catch: The model appears to censor topics deemed too politically sensitive by the government of its home country, China.
The model, Kling, developed by Beijing-based Kuaishou, launched in waitlist form earlier this year for users with a Chinese phone number. Now, it’s available to anyone willing to provide their email address. After signing up, users can enter prompts for the model to generate five-second videos of what they’ve described.
Kling works pretty much as advertised. Its 720p videos, which take a minute or two to generate, don’t stray too far from the instructions. And Kling seems to simulate physics, like rustling leaves and flowing water, about as well as video-generating models like AI startup Runway’s Gen-3 and OpenAI’s Sora.
But Kling definitely won’t generate clips on certain topics. Prompts such as “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” yield a non-specific error message.
The filtering seems to happen only at the prompt level. Kling supports animating still images, and it will happily generate a video of a portrait of Xi, for example, as long as the accompanying prompt doesn’t mention him by name (e.g., “This man is giving a speech”).
We’ve reached out to Kuaishou for comment.
Kling’s curious behavior is likely the result of intense political pressure from the Chinese government on generative AI projects in the region.
Earlier this month, the Financial Times reported that AI models in China will be tested by China’s top internet regulator, the Cyberspace Administration of China (CAC), to ensure their responses on sensitive topics “embody core socialist values.” The models are to be benchmarked by CAC officials on their responses to a range of questions, according to the report — many of them relating to Xi Jinping and criticism of the Communist Party.
Apparently, the CAC has gone so far as to propose a blacklist of sources that can’t be used to train AI models. Companies submitting models for review must prepare tens of thousands of questions designed to test whether the models produce “safe” answers.
The result is AI systems that decline to respond to topics that might raise the ire of Chinese regulators. Last year, the BBC found that Ernie, the flagship AI chatbot model from Chinese company Baidu, demurred and deflected when asked questions that might be perceived as politically controversial, such as “Is Xinjiang a good place?” or “Is Tibet a good place?”
Such draconian policies threaten to slow China’s AI advances. Not only do they require scrubbing data of politically sensitive information, but they also demand investing an enormous amount of development time in ideological safeguards — safeguards that can still fail, as Kling illustrates.
From a user perspective, China’s AI regulations are already leading to two classes of models: some hampered by intensive filtering and others significantly less so. Is that really a good thing for the broader AI ecosystem?