
Korean Film News

AI Shaking Up the Film Industry: From Breaking Feature-Length Barriers to Restructuring Production Costs

Aug 22, 2025
  • Source: KOFIC

Current State and Outlook of AI Film Technology

<AWS Demonstrates Ad Spot Generation Solution Using AI Haiku>

The AI market grew rapidly in 2024. According to Statista, the global artificial intelligence market reached approximately 2,392 trillion KRW in 2024, an increase of about 650 trillion KRW over 2023. If the trend continues, the market is expected to surpass around 10,738 trillion KRW by 2030. Against this backdrop, various experiments using AI for image and video production are underway across the industry.
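Taking the cited 2024 and 2030 figures at face value (both in trillion KRW), the projection implies a compound annual growth rate of roughly 28%, which a quick calculation confirms:

```python
# Growth rate implied by the market figures above (trillion KRW).
v_2024 = 2392.0
v_2030 = 10738.0
years = 6

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (v_2030 / v_2024) ** (1 / years) - 1   # ≈ 0.284, i.e. ~28% per year
```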

[Core AI Technologies and Video Generation Technology]

With AI advancing at an accelerated pace, the film industry is also experimenting with the possibilities of using generative AI for movie production. NAB 2024, the largest broadcast equipment exhibition in the United States, showcased a wide range of cases related to AI-driven video production and workflows. In particular, solutions that automate labor-intensive and repetitive tasks such as video editing, subtitle generation, and dubbing were introduced, raising expectations for how AI will reshape the landscape of filmmaking.

Among AI-based film production technologies, the most anticipated area is video generation, in which AI produces videos from prompts provided by users. OpenAI’s Sora gained attention with its generated videos, fueling much excitement. However, despite the progress of video generation AI such as Sora, Invideo, Runway, and Pika, current technology still struggles to move beyond short clips to long-form, high-quality feature films. The key limitations are breakdowns in the consistency of characters, images, and backgrounds across scenes, fluctuations in quality, and computational constraints.

> LLM Technology and Video Generation Technology

LLMs, developed through the combination of deep learning and natural language processing, represent a core technology in AI. They have evolved from being text-centered to multimodal, encompassing images, video, and audio, thereby forming the foundation for video generation. With the introduction of new architectures such as Google’s Titans, which enhances memory efficiency, this evolution is expected to further accelerate the expansion of video generation.

AI-based video generation can be summarized around three main pillars: multimodality, diffusion, and patches. Multimodality is central to the technology, enabling AI to receive and process diverse types of data such as text, images, and sound. Diffusion models, meanwhile, start from random noise and progressively remove it to generate high-quality video. OpenAI’s Sora, which became a major topic in video generation, employs a diffusion model in its output process.
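The denoising idea can be illustrated with a toy sketch (illustrative only, not Sora's actual model): generation starts from pure random noise, and at each step an estimate of the remaining noise is subtracted, gradually converging toward a clean signal. Here a simple oracle stands in for the trained denoising network:

```python
import math
import random

# The "clean" signal a real model would learn to generate.
TARGET = [math.sin(i / 4) for i in range(16)]

def sample(steps=20):
    # Reverse diffusion sketch: begin with pure Gaussian noise.
    x = [random.gauss(0, 1) for _ in TARGET]
    for _ in range(steps):
        # Oracle noise estimate (in practice, a trained network's prediction).
        predicted_noise = [xi - ti for xi, ti in zip(x, TARGET)]
        # Each step removes a fraction of the estimated noise.
        x = [xi - 0.5 * n for xi, n in zip(x, predicted_noise)]
    return x

out = sample()
err = max(abs(o - t) for o, t in zip(out, TARGET))  # shrinks by half each step
```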

Patches serve as efficient representational units for analyzing visual data during encoding and decoding. Much like tokens in text, they are the minimal data units required for embedding. In Sora, patches are used when encoding visual data such as images and videos. Unlike conventional 2D pixels, these patches incorporate not only spatial information such as width and height, but also temporal information, allowing them to be processed as a 3D structure that integrates both space and time.

[Prospects for AI in Film Production]

> Will AI Replace Humans?

In the film industry and other media content sectors, generative AI has begun to challenge the creative works traditionally produced by humans, opening up new horizons of creativity. In 2024, AI-focused film festivals emerged worldwide. Korean director Kwon Han-seul presented One More Pumpkin at the Dubai International AI Film Festival, winning both the Grand Prize and the Audience Award. This marked a milestone for AI-driven content production both domestically and globally. On the other hand, concerns over job displacement remain strong. During what is often described as the first strike against AI—the 2023 Writers Guild of America strike—writers demanded that AI tools not be used to draft screenplay scripts.

At present, it is not possible to create film-quality content with just a few prompts. Even if clips are generated individually, maintaining consistency across the entire work is difficult, meaning that significant technological advances are still needed before AI can produce a fully completed film.

At NAB, Michael Cioni, CEO of Strada, introduced the concept of Utility AI. Unlike generative AI, Utility AI focuses on handling repetitive, low-value tasks. According to Cioni, this type of AI does not require the massive training costs that generative AI entails.

The area where AI has the greatest impact is in “familiar but low-value” tasks, and prioritizing their automation is the most rational way to enhance productivity. Conversely, where a task is both unfamiliar and of low value, AI’s effectiveness is limited. From a utility perspective, AI is less about promising innovation in creativity itself and more about supporting existing creative processes, making them faster and more reliable.

> Workflow Innovations AI Will Bring to the Media Industry

1) Knowledge Management Systems (KMS)

From a practical standpoint, one area where AI can relieve creators is the Knowledge Management System (KMS). Today, organizations are often trapped in knowledge silos: information within teams or departments exists in isolation, cut off from the rest. When the knowledge produced and managed by writers, directors, producers, art teams, filming crews, and production staff remains fragmented, communication costs inevitably arise.

At NAB 2024, Shailendra Mathur, Vice President and Chief Architect at AVID, introduced the DIKW (Data, Information, Knowledge, Wisdom) framework. He emphasized that with the help of AI, the Data and Information produced by each department can be transformed into Knowledge. When humans utilize this Knowledge, it can then evolve into Wisdom. To build such a KMS, knowledge needs to be structured through semantics and ontology, while LLMs can be leveraged so that all creators can access the information necessary for filmmaking. This process breaks down knowledge silos, ensuring that critical information flows freely across the organization, thereby enabling true innovation.
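The silo-breaking idea can be sketched minimally (a hypothetical illustration, not AVID's system): if every department files its notes under shared semantic tags, a single query surfaces related knowledge across teams instead of leaving it trapped in one department. The departments, notes, and tags below are invented for illustration:

```python
knowledge_base = []

def add_note(department, text, tags):
    """File a department's note under shared semantic tags."""
    knowledge_base.append({"dept": department, "text": text, "tags": set(tags)})

def query(tag):
    """Return every note, from any department, carrying the tag."""
    return [n for n in knowledge_base if tag in n["tags"]]

# Notes from three different departments, linked by a common scene tag.
add_note("art", "Night market set uses neon palette #12", {"scene_07", "lighting"})
add_note("camera", "Scene 7 shot on anamorphic lenses", {"scene_07", "lenses"})
add_note("production", "Scene 7 reshoot scheduled Tuesday", {"scene_07", "schedule"})

hits = query("scene_07")  # one query crosses all three silos
```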

2) Opening New Markets Through Localization

VSI is a global media localization company. Scott Rose of VSI emphasized that localizing video content goes beyond simple language translation; it requires adapting content to cultural contexts, which is a crucial factor in monetizing media content in global markets. He noted that the rise of AI has made dubbing and translation easier by leveraging Automatic Speech Recognition (ASR), Text-to-Speech (TTS), and Speech-to-Text (STT) technologies.
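The dubbing pipeline Rose describes can be sketched schematically (stub functions standing in for real ASR, translation, and TTS services; the sample sentence and its Korean rendering are invented for illustration): speech is transcribed, the transcript is translated, and new speech is synthesized in the target language.

```python
def speech_to_text(audio):
    """ASR/STT stand-in: a real service would transcribe the audio."""
    return "Welcome to the night market."

def translate(text, target_lang):
    """Translation stand-in with one hard-coded example pair."""
    return {"ko": "야시장에 오신 것을 환영합니다."}.get(target_lang, text)

def text_to_speech(text):
    """TTS stand-in: a real service would return synthesized audio."""
    return text.encode("utf-8")

def localize(audio, target_lang):
    transcript = speech_to_text(audio)      # 1) transcribe (ASR/STT)
    translated = translate(transcript, target_lang)  # 2) translate
    return text_to_speech(translated)       # 3) synthesize (TTS)

dubbed = localize(b"...", "ko")
```

As the article notes, the human quality-control step that films require would sit between steps 2 and 3, reviewing the translation for cultural fit before synthesis.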

Rose also explained through the “Content Value Spectrum” framework that AI localization technologies can be applied differently depending on content tiers. Short-form content on social media, as well as sports or radio content, can be mass-produced with minimal human involvement. However, regular programming and film content require active human quality control.

Films, in particular, are difficult to produce solely with AI and cannot be automatically localized because cultural context must be taken into account. Nevertheless, AI-powered localization is expected to create new market opportunities. Beyond simply translating speech into local languages, AI can also adjust lip synchronization to match the language, making localization technologies increasingly valuable.

3) Cost-Effective AI

On platforms such as YouTube and Netflix, free subscription or ad-supported models insert advertisements into content. The placement of these ads must occur at moments that do not disrupt immersion, and traditionally, humans had to manually review content to identify suitable insertion points.

At NAB 2024, AWS introduced a solution that addresses this challenge using Anthropic’s Claude Haiku model, offered through Amazon Bedrock. According to AWS, identifying ad insertion points for a 60-minute video with this system is expected to cost only $1–$2, remarkably efficient compared to manual labor costs.

At the same time, the very design of this solution highlights the importance of ideas in utilizing AI as a tool. Instead of simply requesting Haiku to analyze text, the workflow uses AWS Transcribe to convert film dialogue into transcripts, which Haiku then processes for contextual understanding. This process is further integrated with AWS Rekognition, which analyzes frames and shots, thereby increasing accuracy. The result is the automation of repetitive tasks in a cost-effective manner.
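The core idea can be sketched in simplified form (the real pipeline calls Transcribe, the Haiku model, and Rekognition; here those services are replaced by a plain gap search over invented transcript timestamps): long pauses between dialogue segments are natural candidates for ad insertion points that do not disrupt immersion.

```python
def candidate_ad_points(segments, min_gap=8.0):
    """segments: sorted list of (start_sec, end_sec) dialogue spans,
    as a transcript service would report them. Returns the midpoints
    of pauses long enough to host an ad break."""
    points = []
    for (_, end_a), (start_b, _) in zip(segments, segments[1:]):
        if start_b - end_a >= min_gap:              # long enough pause
            points.append(round((end_a + start_b) / 2, 1))
    return points

# Invented dialogue spans for a ~15-minute stretch of content.
dialogue = [(0.0, 55.0), (70.0, 300.0), (303.0, 610.0), (630.0, 900.0)]
points = candidate_ad_points(dialogue)  # midpoints of the two long pauses
```

In the actual workflow, these transcript-derived candidates would then be cross-checked against shot and frame analysis, which is the role the article assigns to Rekognition.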

<Cowgirls on the Moon> Project Video

In the film industry, the most anticipated application of generative AI is in VFX. VFX combines both creative and labor-intensive repetitive tasks. At NAB 2024, AWS showcased the project Cowgirls on the Moon, presenting an example of VFX production using generative AI. Of particular interest is virtual production, a technique that replaces green/blue screens by installing large LED walls behind objects, allowing background images to be composited and rendered in real time during filming. AI demonstrated solutions for asset creation within this workflow.

<Krea’s Technology for Creating 3D Objects from 2D Images>

On January 17, 2025, Krea unveiled AI technology that converts 2D images into 3D objects. This represents a step beyond the 2.5D asset creation demonstrated by AWS. By training AI on various 2D images and combining them to generate 3D objects, this technology is expected to be applied across virtual reality, 3D assets, restoration, and other fields.

The detailed report, KOFIC Current Issues Report 2025-01: Status and Outlook of AI Film Technology, is available on the KOFIC Policy Research bulletin.

Written by Lee Yoon-woo, Film Technology Infrastructure Team, Korean Film Council (KOFIC)


Republication, copying or redistribution by any means is prohibited without the prior permission of KOFIC and the original news source.