Mirage AI Instantly Transforms Live Video With Just a Text Prompt
Friday, July 18, 2025, 06:53 PM, from eWeek
A new artificial intelligence tool, Mirage, can transform live video in real time. With just a text prompt, users can change their stream’s visual style to anime or cyberpunk.
Mirage is the product of the Israeli startup Decart, which claims it is the “first system to achieve infinite, real-time video generation with zero latency.” It is powered by MirageLSD, the company’s live-stream diffusion AI video model, which generates 20 frames per second at 768×432 resolution. “Real-time generation requires each frame to be produced in under 40 milliseconds in order to not stand out to the human eye,” the Decart team wrote in a blog post.

Decart’s video demos showcase its AI transforming Minecraft gameplay into a snowy wonderland, immersing Call of Duty players in a serene meadow of pink trees, and reimagining One Direction’s ‘What Makes You Beautiful’ in a variety of anime styles.

Each frame is generated sequentially, using both a text prompt and the previous frame as input, in what’s known as a causal, autoregressive approach. The tool could feasibly be used for live streaming on platforms such as Discord or TikTok, for video calls, and for ordinary playback of pre-recorded TV shows, films, and clips. Decart hopes to add support for full HD and 4K in the future.

How real-time AI transformation was achieved

Competing video-to-video AI tools have struggled with livestreams primarily because of the autoregressive models they use. Each new frame is generated using information from the previous frames, which means it inherits any errors. After about 30 seconds of footage, these accumulated errors seriously degrade the video quality.

Mirage addresses the issue with two innovations. First, diffusion forcing trains the model to clean up noisy frames without needing full context, enabling accurate frame-by-frame generation. Second, history augmentation teaches it to recognize and fix errors in its own past outputs, so it learns to do the same during generation and prevent quality drift over time.

Typical AI video-to-video tools are also often slow, requiring minutes of processing for just a few moments of output. With Mirage, there is only about a 100-millisecond delay between input and output, including processing time and other system overheads. Decart achieved this by reducing the number of steps required to generate each frame and trimming the model to run more efficiently. The system is specifically tuned for NVIDIA Hopper GPUs using custom low-level GPU code, which improves speed and minimizes latency.

Streamers using tools like Mirage to transform their YouTube videos in real time don’t need to worry; the platform says AI content is fine as long as it isn’t “inauthentic.”

The post Mirage AI Instantly Transforms Live Video With Just a Text Prompt appeared first on eWEEK.
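To make the causal, autoregressive approach and the per-frame latency budget concrete, here is a minimal Python sketch of such a loop. It is not Decart’s code; the denoise_frame() placeholder, the stand-in computation inside it, and the stream() driver are hypothetical, and only the figures (20 fps, 768×432, the sub-40 ms frame budget) come from the article.

```python
# Minimal sketch of a causal, autoregressive video loop: each output frame is
# produced from the text prompt and the previous output frame, under a latency budget.
import time

import numpy as np

FPS = 20                  # frames per second reported for MirageLSD
FRAME_BUDGET_S = 0.040    # "under 40 milliseconds" per frame, per the Decart blog post
WIDTH, HEIGHT = 768, 432  # output resolution reported in the article


def denoise_frame(prev_frame: np.ndarray, prompt: str) -> np.ndarray:
    """Placeholder for the diffusion step: produce the next frame conditioned
    on the text prompt and the previous output frame (hypothetical stand-in)."""
    # Dummy computation so the sketch runs; a real model call would go here.
    return np.clip(prev_frame * 0.99 + 1, 0, 255).astype(np.uint8)


def stream(prompt: str, num_frames: int = 100) -> None:
    frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)  # initial frame
    for i in range(num_frames):
        start = time.perf_counter()
        frame = denoise_frame(frame, prompt)  # causal: depends only on past output
        elapsed = time.perf_counter() - start
        if elapsed > FRAME_BUDGET_S:
            print(f"frame {i}: {elapsed * 1000:.1f} ms, over the 40 ms budget")
        # A real pipeline would display or encode the frame here.


if __name__ == "__main__":
    stream("anime style, snowy wonderland")
```

At 20 fps a new frame is needed every 50 ms, which is why generation itself has to stay under roughly 40 ms to leave headroom for the other overheads that make up the reported ~100 ms input-to-output delay.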
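The history-augmentation idea can also be sketched: during training, the past frames the model conditions on are deliberately corrupted, so it learns to correct the kinds of errors its own outputs would otherwise accumulate. The function names, the noise model, and the loss below are illustrative assumptions, not Decart’s method in detail.

```python
# Rough illustration of history augmentation: train on deliberately corrupted
# history frames so the model cannot rely on a perfect past and learns to fix drift.
from typing import Callable, Optional

import numpy as np


def corrupt_history(history: np.ndarray, noise_scale: float = 10.0,
                    rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Add artifacts to past frames (here, simple Gaussian noise as a stand-in)."""
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, noise_scale, size=history.shape)
    return np.clip(history + noise, 0, 255).astype(np.uint8)


def training_step(model: Callable[[np.ndarray, str], np.ndarray],
                  clean_frames: np.ndarray, prompt: str) -> float:
    """One conceptual step: predict the next frame from a corrupted history and
    score it against the clean target (no gradient update shown)."""
    history, target = clean_frames[:-1], clean_frames[-1]
    noisy_history = corrupt_history(history)
    prediction = model(noisy_history, prompt)
    return float(np.mean((prediction.astype(float) - target.astype(float)) ** 2))
```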
https://www.eweek.com/cloud/decart-mirage-ai-live-video-rea-time-changes/