Creators Sue LLM in China, Can They Win?
PLUS: Huawei HarmonyOS NEXT system; Samsung generative AI phone
Starting today, The Abacus will be published three times per week. You can expect the newsletter in your inbox every Monday, Wednesday, and Friday. Interested in unlocking Asia's AI news? Sign up here!
Last week, we discussed how China protects AI-generated images. (I highly recommend reading this post to get a more comprehensive understanding of China's legal stance.)
Meanwhile, four artists have sued Xiaohongshu for training its image-generating LLM, Trik AI, on their works without permission. The lawsuit, filed at the Beijing Internet Court, is the first in China in which a large language model and its operator are the defendants. Since the lawsuit was filed, updates to Trik AI have ceased.
For today’s post, I spoke with Zhixin Yi, a seasoned intellectual property lawyer and partner at the Anjie Law Firm's Shanghai office. He provided numerous insightful perspectives, and I really hope this Q&A session will help you grasp the operational logic of China's legal system.
As an expert in intellectual property, Zhixin has worked with many Fortune 500 companies across industries such as luxury and entertainment.
This interview has been translated and edited for clarity.
Jinpeng Li: As an intellectual property lawyer, how do you perceive the advent of generative AI such as ChatGPT and Midjourney? Do you believe these innovations are poised to shake up the legal framework?
Yi Zhixin: The impact is definitely there, but it's too early to call it a “shock”. Generative AI, like ChatGPT, is a recent technology for the public, but current laws can still apply to cases involving it. Laws don't always catch up quickly to technology. For example, back in the early days of the internet industry, I helped an internet company with an unfair competition case. We had to rely on general legal principles, such as the duty to conduct business honestly, to handle cases with no precedents. As more cases popped up and the internet became a bigger part of life, courts from the local level all the way up to the Supreme Court started to see patterns and developed specific rules. Eventually, these got turned into formal laws.
Based on the case in which China protected the copyright of an AI-generated image, what can we infer about the court's attitude and stance toward AI?
Yi Zhixin: The main point of this case was about the role humans played in the creative process with AI. There's a debate whether to see AI as just a tool or as part of the creation. The Beijing Internet Court treated AI as a tool in this case, meaning the real creative input came from the person using the AI. This decision is sensible, especially for new technology, as it keeps things orderly. It respects existing laws while also encouraging people to try using AI. This is important for the future of AI technology.
Coming back to the case of the illustrators suing Xiaohongshu's Trik AI, the core issue is whether training an LLM on copyrighted materials, whether images or text, without permission constitutes infringement.
Yi Zhixin: If an LLM uses datasets without the creators' authorization, it definitely constitutes infringement because it lacks the creators' consent. However, some software now includes this in its terms and conditions, stating that anything created by users may be used to train LLMs. If you want to use these products, you have no choice but to accept those terms.
Does this amount to the company forcing people to agree?
Yi Zhixin: It all comes down to the platform's specific user terms. If a user doesn't want their content used for training and just loses access to some features, that's usually okay. It's about more than just legal stuff; it's also about business choices. You can always switch to a different platform if you don't like the terms. But if a product dominates the market (like over 90% share) and you have no other options, then it's not fair for the platform to enforce such terms on users.
In the realm of copyrights, what makes large language model infringements unique compared to other copyright cases?
Yi Zhixin: I believe the core hasn't changed. Think of large language models as tools, like a search engine. Just as a search engine isn't directly responsible for every book or movie it finds, these AI models aren't automatically at fault for what they process. The real issue lies with the companies running them. It's like a video player being used to spread illegal content; the problem isn't the player, but the company behind it. We have to look at both the tool and who's using it.
What documents would a court ask for to decide whether an LLM has violated a creator's copyright?
Yi Zhixin: In China, the rule is that if you make a claim, you have to prove it. So, if I say my work was copied, I need to show evidence of where and how it was used by someone else. To prove that a language model used a creator's work, one would have to recreate the steps and demonstrate that the language model's output is very similar to their original work.
As a creator who is concerned about their work being used to train LLMs without consent, what steps can one take to protect their rights and potentially seek financial compensation? Additionally, do you foresee an increase in similar lawsuits in the future?
Yi Zhixin: If someone uses your work without permission to train a big AI model, it's considered infringement in China. You can ask them to stop using your work and pay for any losses, which are usually based on how much your work is worth. We'll probably see more cases like this in the future.
But for creators, the hard part is proving it. If a company uses your work quietly and it's just a tiny part of a huge database, showing that the AI's output is too similar to your original work can be tough, and it's a real challenge for the judges to figure out.
If the creators lose this case, should we rely more on moral debates to ensure a world that respects and protects original works?
Yi Zhixin: AI technology and copyright law have a tricky relationship, and moral debates alone aren't enough to solve the problems. But, platforms can work differently with creators, like through contracts or licenses. For example, think of how music streaming services work.
Like what OpenAI did? They paid publishers like Politico and Business Insider for their content.
Yi Zhixin: Yes.
Asia Must Reads
Huawei unveils HarmonyOS NEXT, accelerating the move away from Android
Driving the news: The new operating system will launch for commercial use in the fourth quarter of this year. HarmonyOS NEXT, a major upgrade from Huawei's previous mobile system, will no longer run apps made for Android.
Why it matters: Since HarmonyOS's debut in August 2019, Huawei has been working toward independence from Android, and it now has a clear timeline for the move. Huawei aims for HarmonyOS NEXT to initially cover the top 5,000 apps that account for 99% of users' time. So far, more than 200 companies have joined as the first batch of partners developing native applications for the system.
Samsung generative AI phone
Samsung recently introduced the 2024 version of its flagship Galaxy smartphone series in San Jose. The devices are equipped with the latest AI-capable Snapdragon processor from Qualcomm, as well as Samsung's own Galaxy AI software and services. (Song Jung-a / FT)
Driving the news: Samsung introduced a suite of AI-enabled features based on Google Gemini, including live translation of phone calls, transcription of voice recordings, video search, and photo editing.
Why it matters: Samsung needs to find ways to compete with Apple, and phone makers are exploring the potential of adding AI features to their devices.
Bonus
Samsung also unveiled a prototype of its latest smart wearable device, the Galaxy Ring, but it hasn't yet said when the device will be released or what it might cost. (Jay Peters / The Verge)
China’s industry ministry has issued draft guidelines to standardize the AI industry. The draft said China aims to participate in forming more than 20 international standards for AI by 2026.
The Philippines plans to propose a legal framework for AI regulation to the Association of Southeast Asian Nations (ASEAN), based on the country’s own draft legislation, when it chairs the bloc in 2026. (Martin Petty / Reuters)
The International Monetary Fund (IMF) says AI will affect almost 40% of global jobs. Advanced economies may see about 60% of jobs affected, more than emerging markets and low-income countries.
Recent from The Abacus
Microsoft won't shut down the Beijing AI Lab
China Protects AI-Generated Image Copyright
The Abacus's 2024 Predictions: AI, TikTok, US-China relationship and more