
Animation, Generative AI

BEFORE YESTERDAY

Duration

Summer 2025 (1 week)


Tools

RunwayML, Midjourney, Premiere Pro, After Effects

Overview

This project is an AI-driven reconstruction of memory, built in honor of my late grandparents. I curated a series of images that resembled the kind of Bangkok they used to describe during our late-night dinner conversations. It reflects the version of the city I grew up with, before major gentrification and the influx of tourism. I imported these images into RunwayML and animated them to bring those moments to life.

I then combined the animated footage with a generated background created in Midjourney and edited everything in Adobe After Effects. The background shows an old TV and living room, modeled after my grandfather’s apartment. I found a photo of that space in an archive and used it as a base to generate an AI version. I chose a retro television as the frame because I remember having one in his living room as a kid.

This work is not about the future. It is about preserving a feeling, a song, and a city as they once were.

Collage of reference images used for this project

Planned Methodology

Image Curation

Selected images that resembled the Bangkok environment I wanted to illustrate.

Image-to-Video Conversion

Uploaded the images into RunwayML and animated them with Gen-4 image reference.

Video Stylization

Imported the videos into Adobe Premiere Pro, then edited and color-graded them.

TV Scene Generation

Generated an image of the living room with the TV using the image-blend tool in Midjourney, then composited and edited the video into the TV screen.

Final Editing on RunwayML

Imported the edited video into RunwayML and added effects to enhance the mood of the video.

To create the scenes on Runway, I used Runway Gen-4’s reference image-to-video feature combined with text prompts. I kept a fixed seed and locked the camera for consistency across select shots. My prompts emphasized a cinematic feel with warm vintage lighting, using “shot on PXW” to simulate the texture and depth of Sony’s professional camera look. This approach helped preserve the mood and composition of the reference while letting the AI animate subtle motion in a natural, filmic way.
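The settings described above (a fixed seed, a locked camera, and a cinematic prompt built around the reference image) can be sketched as a request payload. Note that the function and field names below are illustrative only, not Runway's actual API schema:

```python
# Illustrative sketch of the Gen-4 image-to-video settings described above.
# All field names here are hypothetical, not Runway's real API schema.

def build_gen4_request(reference_image: str, shot_description: str,
                       seed: int = 421, lock_camera: bool = True) -> dict:
    """Bundle one shot's settings: reference image, cinematic prompt,
    fixed seed, and a locked (static) camera for shot-to-shot consistency."""
    prompt = (
        f"{shot_description}, cinematic, warm vintage lighting, "
        "shot on PXW"  # emulates the texture of Sony's professional cameras
    )
    return {
        "model": "gen4",
        "reference_image": reference_image,
        "prompt": prompt,
        "seed": seed,  # reused across shots so results stay consistent
        "camera_motion": "static" if lock_camera else "auto",
    }

request = build_gen4_request("bangkok_street.jpg",
                             "night market street in old Bangkok")
```

Keeping `seed` and `camera_motion` identical across shots is what makes the separate clips feel like they belong to the same film.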

I recorded my experience working with RunwayML's Gen-4 tool below:

Strengths: 

  • High Visual Realism: Scenes closely resemble the reference images, with improved accuracy in textures, lighting, and overall composition.

  • Fluid Motion: Movement is significantly smoother, avoiding the jitteriness seen in previous versions.

  • Dynamic Camera Work: Supports more complex camera motions like pans, zooms, and tracking shots with a cinematic feel.

  • Better Temporal Consistency: Frames remain visually coherent throughout, reducing flickering or drastic visual jumps.

  • Improved Subject Anatomy: Human figures and objects retain form and proportion more reliably across motion.

  • Natural Gestures and Expressions: Subtle movements such as blinking, walking, or shifting body weight are rendered with more nuance.

  • Enhanced Lighting Effects: Responds well to prompts involving specific lighting moods like “warm vintage” or “golden hour.”

  • Great for Storytelling: The combination of realism and fluidity makes it ideal for narrative video projects or stylized cinematics.

Weaknesses: 

  • Struggles with Complex Scenes: Scenes involving multiple people or dense environments were harder to control and often produced inconsistent results.

  • Imprecise Hand and Limb Movement: The model still has difficulty accurately rendering detailed hand gestures or fine motor actions.

  • Limited Control: Users can't yet fully regulate what parts of the image to change or preserve, making fine-tuning outputs challenging.

  • Unpredictable Edits: Small changes in prompt wording or reference images can sometimes lead to disproportionate visual shifts.

  • Temporal Drift in Longer Clips: In sequences with more than a few seconds of movement, consistency in character identity or layout can start to degrade.

  • Over-smoothing of Motion: In some cases, the fluidity comes at the cost of sharpness or clarity in fast or complex motions.

  • Lack of Interactivity: There’s no native way to keyframe or guide specific character or object trajectories across time.

Workflow


Initial Editing and Compositing on Premiere Pro 


Added TV effects in After Effects over the composited footage


Composited the footage into the TV image using a mask layer; since it was a still shot, no motion tracking was needed
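Because the shot is static, a single fixed mask is enough: pixels inside the TV screen take the animated footage, everything else keeps the living-room still. A minimal per-pixel sketch of that composite (pure Python, grayscale values for brevity; After Effects does the same blend per channel):

```python
def composite(background, footage, mask):
    """Static-mask composite: out = mask*footage + (1-mask)*background.
    All three inputs are equal-sized 2D grids of 0-255 values;
    mask is 1.0 inside the TV screen region, 0.0 outside."""
    return [
        [m * f + (1 - m) * b
         for b, f, m in zip(brow, frow, mrow)]
        for brow, frow, mrow in zip(background, footage, mask)
    ]

bg = [[10, 10], [10, 10]]        # living-room still
fg = [[200, 200], [200, 200]]    # animated footage
mask = [[1.0, 0.0], [0.0, 0.0]]  # only the top-left pixel is "screen"
out = composite(bg, fg, mask)    # top-left shows footage, rest shows bg
```

With a moving camera the mask would have to be tracked frame by frame, which is exactly the work the locked-off shot avoids.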


Final touch-ups on RunwayML to speed up render time
