- Korean Film News
- How Should We Use AI, and How Will We Take Responsibility?
- by KOFIC / Nov 26, 2025
Rapid Penetration of Generative AI and Emerging Legal Issues
Generative Artificial Intelligence (GAI) has been permeating nearly every area of society and culture at an astonishing pace. It is now difficult to find an industry that does not utilize AI at all—its presence has already become a fundamental part of daily life. In writing and search, image and video production, music composition and editing, and across the entire creative spectrum, AI’s influence is overwhelming. News organizations use AI to draft article outlines, advertising agencies automatically generate and refine slogans and marketing copy, and video creators receive AI-recommended cuts and automated corrections. In music, AI composers now produce tracks nearly indistinguishable from human-made melodies.
This transformation is happening quickly in Korea as well. CJ ENM produced a film using AI and won awards at domestic and international film festivals. AI dubbing and voice synthesis are already emerging as new standards in advertising and video production. The 2024 Bucheon International Fantastic Film Festival even created an “AI Film International Competition” section, screening and evaluating films made with AI. On YouTube, short-form content produced with AI video-editing tools is proliferating rapidly.
Such expansion of generative AI is not merely a change in tools—it is restructuring the creative system itself. The film industry is a prime example. Traditionally, filmmaking was a complex system requiring specialized labor, high costs, and long timelines across idea development, scriptwriting, concept art, filming, editing, scoring, and distribution. Now, some production teams generate concept images through text prompts, rely on AI for shot suggestions and rough edits, and even create soundtrack cues or convert actors’ voices into different languages. An entire workflow—from planning to post-production—can now be reorganized through a single AI tool.

‘Bucheon Choice: AI Films’ – AI International Competition at the 2025 Bucheon International Fantastic Film Festival. Korean entries ‘Mold’ (left) and ‘Confession’ (provided by BIFAN)
However, behind this efficiency lies a vacuum of legal clarity. Many people wonder when and how copyright is created for AI-generated output, and who the rightful owner is. A basic framework is now beginning to emerge. In June 2025, the Korea Copyright Commission published the Guide to Copyright Registration for Works Generated Using Generative Artificial Intelligence, clarifying the legal principles of AI-generated works. According to the guide, a work automatically generated by AI does not receive copyright protection and cannot be registered, because copyright requires human creativity. However, there are exceptions:
① When the user inputs their own copyrighted work as a prompt and the resulting GAI output reflects that original creativity
② When the user adds creative modifications—such as edits, adjustments, or enhancements—to the GAI output
③ When the user exercises creativity in selecting, arranging, or composing GAI-generated elements
In these cases, human creative contribution is recognized, and copyright registration becomes possible. In short: an AI-generated result created solely via prompts has no copyright, but if the user adds their own creative touch afterward, that portion is eligible for copyright protection.
Since 2024, the Korea Copyright Commission has registered works such as the Korean AI film ‘AI Lady Suro’ as compilation works, recognizing that although the individual AI-generated elements—images, clips, sounds, dialogue—may not have copyright, the human creativity involved in selecting, arranging, and editing them grants copyright to the final compilation. Thus, even if the individual AI elements lack copyright, the completed film can be protected if human creative involvement is acknowledged.
Accordingly, films created using AI tools have various avenues for rights protection. The status of pure AI outputs with no human intervention remains ambiguous, but once meaningful creative editing, alteration, or refinement is added, copyright vests in the human creator.

Korean AI Film ‘AI Lady Suro’ (provided by Nara Knowledge Information YouTube)

AI Use and Legal Issues at Each Stage of Filmmaking
AI is now present in almost every stage of filmmaking—from scriptwriting to synthesizing actors’ faces and voices, to organizing shots and editing scenes. Each stage introduces new legal questions.
First is the scriptwriting stage. Many writers now use language models such as ChatGPT, Claude, or NovelAI for ideas, information gathering, plot outlines, drafts, and dialogue. Using AI for ideas or reference is not legally problematic—ideas themselves are not protected by copyright, and writers remain responsible for verifying accuracy.
In 2023, the Writers Guild of America (WGA) went on strike, partly to restrict the use of AI for scriptwriting. The resulting agreement prohibits studios from forcing writers to use AI. In Korea, however, no such negotiations exist, and AI models do not block script generation, leaving decisions entirely to the writer.
A potential issue arises if AI generates dialogue that is excessively similar to existing works. Still, isolated similarities do not constitute infringement; substantial similarity across a significant portion of dialogue is required. To prove infringement of an entire script, even non-verbal elements—plot, structure, character relationships—must be considered. AI-generated scripts rarely reach this level of substantial similarity.
Thus, legal risks arise only if a writer delegates an entire script to AI without revision and the resulting script substantially overlaps with another. In reality, such cases are rare. Using AI to draft ideas or early versions, followed by human editing, is unlikely to cause immediate copyright issues.
Moving to production, actors’ likenesses and voices raise major concerns. In Hollywood, the Screen Actors Guild–American Federation of Television and Radio Artists (SAG-AFTRA) has protested the risk of AI replacing actors. More recently, the creation of AI actor ‘Tilly Norwood’ by Dutch AI studio Xicoia triggered strong backlash from SAG-AFTRA.

AI Actor ‘Tilly Norwood’, protested by Hollywood actors (source: Tilly Norwood Instagram)

Legally, using AI to generate an actor’s likeness or voice poses several problems. If an AI generates an image identifiable as a real actor, this may infringe their right of publicity or portrait rights. It may also be considered a derivative work using copyrighted photos, performances, or recordings.
Therefore, AI-generated images or voices resembling real actors require explicit consent. Without it, commercial use may violate portrait rights, publicity rights, and even lead to criminal liability under unfair competition laws.
AI tools are also increasingly used in editing—color grading, cut selection, subtitles, voice cleanup, background compositing. Adobe Premiere Pro 2024 added generative AI features to assist editing, and many other tools offer similar functions. Using AI as a production tool does not inherently cause legal issues. However, if a final work relies almost entirely on AI with minimal human creativity, it may not qualify for copyright protection.
Film platforms and festivals may require disclosure when AI is used—either for generated scenes or editing. If AI-generated visuals resemble existing copyrighted works, infringement risks arise. As with all AI outputs, creators must review the final work to ensure no substantial similarity exists with prior works.

‘Run to the West’, Korea’s first AI-assisted feature film released on October 15 (provided by CJ CGV)

The Future of Legal Disputes Surrounding AI
As AI enters the creative ecosystem, related legal issues continue to emerge. In the past, legal disputes in filmmaking centered around directors, screenwriters, and production companies. In the AI era, however, films now encompass complex legal questions involving data providers, algorithm designers, and technicians who use AI.
Future disputes will likely be extensions of those already present. The first issue is the legitimacy of AI training data. If it is revealed that films, scripts, videos, music, or images were collected without authorization to train large-scale models, responsibility may trace back to the earliest stages of production. The recent lawsuit between Getty Images and Stability AI showed that even where unauthorized use of training data is not judged to be copyright infringement, it may still implicate trademark and other rights. Another example is the case involving AI startup Anthropic, which allegedly used hundreds of thousands of copyrighted works without permission to train its chatbot ‘Claude’; in September 2025, the company reached a settlement of 1.5 billion USD (about 2 trillion KRW) with authors. Such high-value settlements demonstrate that disputes over unauthorized data training are becoming increasingly common overseas. Korea is no exception—Korean AI companies have likely crawled films, trailers, and news videos for training, and similar lawsuits may arise in the future. If the dataset—a structured set of data used to train, validate, and test AI models—is unlawful, the legal stability of any film produced on top of it is also undermined.
The second issue concerns the legal protection of AI outputs. Current legal principles hold that works without human creative involvement are not considered “copyrighted works” under copyright law. Therefore, film studios cannot secure exclusive rights over scenes or music created solely by AI. To avoid such instability, overseas discussions have proposed models of “AI joint authorship,” which require minimal human involvement such as selecting, editing, or modifying results. Industrial practice may evolve into a dual structure where AI produces drafts and humans refine them to completion. Without the ability to prove such human involvement, rights become ambiguous, and investors or distributors may avoid a project due to legal risks.
The third issue concerns the expansion of moral rights and publicity rights. As AI-generated image and voice technologies advance, protection for identifiable celebrities will become increasingly reinforced. Existing legal frameworks already allow sufficient protection through portrait rights and publicity rights, but laws may eventually codify these protections from the data training stage to output generation. Actors, voice actors, and performers may require prior consent, separate compensation, and clearly defined usage limits—likely becoming international standards. Furthermore, using the face or voice of a deceased actor as a form of “digital legacy” could generate new disputes involving the honor of the deceased and the consent of heirs. Since publicity rights themselves can be inherited, related disputes are already possible under current law.
The film ‘The Brutalist’, which won three Oscars this year despite controversy over AI-adjusted dialogue ahead of the Academy Awards (provided by Universal Pictures International Korea)

The fourth issue involves platforms and film festivals potentially strengthening disclosure requirements and eligibility criteria. Labeling synthesized or generated scenes that could be mistaken for real imagery, and disclosing AI use during advertising review or rating classification, may become essential steps in distribution. Violating labeling rules could escalate beyond simple contract breaches into accusations of deception or violations of advertising regulations. On various OTT platforms, metadata indicating AI involvement may become a de facto requirement for content delivery.
These changes will fundamentally restructure the filmmaking ecosystem. A licensing market for AI training data may emerge, along with transparency and certification systems for AI usage, and technologically defined compensation structures for creators and performers. In the long term, an “AI copyright trust” system—similar to collective royalty management in the music industry—may be established to manage fees for AI training data. Ultimately, industries will need to follow evolving regulations and international guidelines.
For now, the law still lags behind technology. Generative AI evolves so quickly that new versions appear almost monthly, while copyright reforms and AI-related legislation remain tied up in procedural reviews and stakeholder conflicts. In the meantime, the film industry operates through de facto “self-regulation”: festivals require disclosure of AI use, production companies maintain internal usage logs, and some studios mark AI-assisted shots with “AI-processed cut” labels. Although such practices are not yet legal obligations, transparency and traceability are increasingly seen as measures of trust.
AI has already entered the language of cinema. If it is a tool that cannot be banned, then its use must be governed. The core dispute awaiting Korean cinema is not whether to use AI, but how to use it—and how to take responsibility for its use. Industry safeguards technology, law safeguards rights, and art safeguards meaning. If these three fall out of balance, AI will remain a machine of conflict rather than a tool for creation. But if balance is achieved, AI may instead expand the boundaries of film, becoming a new collaborator and offering human creators more opportunities to demonstrate originality. Throughout the history of art, technology has not replaced creativity but has evolved alongside it. The AI era will be no different. By legally and responsibly harnessing AI, we stand at a point where art can evolve through technology rather than be overshadowed by it.
By Jiwoo Jung (Lawyer and Cultural Critic, Author of ‘AI, Writing, Copyright’)
- Any copying, republication or redistribution of KOFIC's content is prohibited without prior consent of KOFIC.










