
  Latent Diffusion / 3D Scene Capture / Midjourney / Runcomfy UI / RunwayML

KRUNGTHEP 2.0

Duration

Spring 2025

1 month

RESEARCH & EXPERIMENTATION

Tools

Midjourney, RunwayML, Luma3D, Unity, Visual Studio Code

Overview

For my submission, I’m sharing Krungthep 2.0, a short AI-assisted animation that’s part of an ongoing series exploring how machine vision imagines the future of cities. This piece focuses on Bangkok, my home city, and uses AI as both a tool and a lens to reconstruct speculative urban landscapes based on digital memory. I wanted to explore what happens when a city is reinterpreted by algorithms trained on public data, media saturation, and fragmented visual culture. What does the future look like when it’s not shaped by people, but predicted by machines?

In this project, I used Midjourney to generate imagined cityscapes of Bangkok. From there, I made a personal creative rule to rely only on AI-driven processes to build the visuals. I took those static images and used RunwayML’s camera control feature to give them depth and movement. RunwayML wasn’t the only AI tool I used, but it played a pivotal role in speeding up the animation process. I could have spent hours modeling scenes in Blender or animating frame by frame, but I chose a method that felt more efficient and aligned with the concept. The goal was not realism, but immersion, and I wanted to create the feeling of stepping into an AI’s imagined world.

Once I had those motion-augmented clips, I used Luma3D to generate Gaussian Splat reconstructions. I chose Luma3D because of what I learned from an earlier project, Latent Space. During that experiment, I noticed that Luma3D often produced fragmented, unstable environments that felt like half-formed memories. Instead of avoiding that limitation, I leaned into it. The glitchy geometry and incomplete renderings felt symbolic of how AI perceives space. It became a visual metaphor for how speculative and disconnected AI-driven futures can be.

After generating the splats, I manipulated them directly by editing the motion, opacity, and color of the individual points through custom code. Although I realized later that I could have used Unreal Engine, which supports Gaussian Splatting more smoothly, I chose Unity instead because I wanted to challenge myself and better understand how splat files are structured and rendered. I designed a dark, empty scene with deep red tones to represent a digital abyss. For me, that space captures how I imagine AI vision: unanchored, speculative, and shaped by fragmented data instead of lived experience.

For the sound, I composed the audio using a mix of sound presets and procedural sound generators. I wanted the sound to echo the eerie, synthetic tone of the visuals and create a sense of quiet tension throughout the environment. The result is an ambient experience that is more about atmosphere than narrative, and more about suggestion than resolution.

Final Workflow

Introduction | Dreaming New Worlds with AI

Now that we’ve got VR goggles, generative AI that can transform videos in real time, and even tools like AI-assisted Gaussian splatting, you have to wonder: are we actually getting closer to building our own Ready Player One?

It was this idea that first captured my curiosity. The idea of stepping into fully AI-generated worlds where everything from the landscape to the story could shift in real time was just too wild not to chase. What if the future of virtual reality isn’t just about better graphics, but about entire environments dreamed up by machines? That curiosity kicked off this project, a deep dive into how today’s AI tools are already shaping the way we see, build, and maybe even live in virtual worlds.

Concept

From there I set out to build a project that harnesses generative AI tools to construct a virtual world, where I serve as the curator, guiding the vision, while the AI assumes the role of the creator, shaping form, texture, and motion. I was equally drawn to investigating efficient pipelines that could refine and accelerate the animation process, reducing friction between ideation and execution.

Below is a step-by-step overview of the process undertaken....


Literally AI Art. Created with Midjourney

...

Plan

Taking Golan Levin’s Generative AI class, I explored a range of cutting-edge tools, starting with Midjourney for image generation, then RunwayML for video, and eventually ComfyUI, an open-source workflow tool that gave me full creative control. Through this, I learned how techniques like LoRAs and GANs shape visual outputs. I also explored Luma3D, which blends real-world footage with digital environments. For my final project, I plan to leverage all of these tools, along with my new skills in Unity, to build an immersive, AI-generated world. Below is an outline of what I plan to do.

Tools I Used

Midjourney workflow (on Discord)

ComfyUI workflow (by far the most complicated)

RunwayML workflow

Planned Methodology

Scene Generation

Use Midjourney to generate images of the scenes I want to render.

Image to Video Converter

Upload a Midjourney image into RunwayML to convert it into a video. This will allow Luma3D to analyze the scene better.

Video Stylization

I figured it would be interesting to use multiple layers of AI hallucination by repeatedly feeding the output into another AI tool.

Video to Gaussian Splat Converter

Use Luma3D (the easiest workflow): upload the video into the application and choose "Convert to Gaussian Splat."

Splat Edit

Edit the Gaussian splat with SuperSplat. Splats from Luma3D come with a spherical lighting environment around them.

Done. Have Fun!

Explore the interactive scene and move around the Gaussian splat.

Download a Character Camera

To move the character around the scene and explore, use a kinematic camera controller.

Combine Splat Scenes

It would be interesting to combine all the exported splats into one scene and make your own AI environment.

Choose Splat Renderer Mode

Choose between point cloud and Gaussian Splat render modes. Point cloud gives the main points that the NeRF analyzed from the space.

Download Gaussian Splat Workflow

Neither Unity nor Blender has a built-in Gaussian Splat viewport / renderer, so it's important to download one.

Mood board

Works by J. Henny, 3D animator and concept artist

Works by Sujin Kim, 3D artist


Death Stranding


Cyberpunk 2077


No Man's Sky

Inspo.

I’m drawn to envisioning spaces that once were, or those that might one day be. There’s something compelling about fragmented, unfinished environments; their incompleteness suggests memory, loss, or something not yet fully realized. That’s why I use Luma3D to create point cloud splats, embracing their raw, glitchy surfaces. When paired with reference imagery generated by AI, these spaces begin to feel detached and dystopic, echoing how machines might perceive and reconstruct our world, not with emotion, but with cold approximation.

Process & Interactive Display

Unity Rendering & Editing

I animated the debug points and splats to create a dynamic and fragmented aesthetic that mirrors the disjointed nature of machine perception. The shifting motion adds a sense of instability and life, transforming what would otherwise be static data into an active visual experience that reflects the unpredictability and abstraction of digital interpretation.

Steps to Animate Points:

  1. Navigate to the project hierarchy.

  2. Click on the Gaussian splat object file.

  3. Open the drop-down menu called Resources in the Inspector.

  4. Navigate to the debug points renderer.

  5. Locate the vert function in the debug points code.

  6. Change it depending on how you want to animate your points.

Steps to Animate Splats:

  1. Navigate to the project hierarchy.

  2. Click on the Gaussian splat object file.

  3. Open the drop-down menu called Resources in the Inspector.

  4. Navigate to the shader point renderer.

  5. Locate the vert function in the .shader code.

  6. Change it depending on how you want to animate your splats (a sketch of this kind of edit follows below).
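
To make step 6 more concrete, here is a minimal, hypothetical sketch of a Unity point shader (ShaderLab/HLSL) with a time-based offset added inside its vert function. This is not the shader that ships with the Gaussian splat renderer I used; the property names (_Amplitude, _Speed) and struct layout are placeholders, but the edit itself, displacing each point with a sine wave driven by Unity's built-in _Time value, is the kind of change described in the steps above.

Shader "Sketch/AnimatedDebugPoints"
{
    Properties
    {
        // Placeholder properties, not part of the original renderer
        _Amplitude ("Wobble Amplitude", Float) = 0.05
        _Speed ("Wobble Speed", Float) = 1.0
    }
    SubShader
    {
        Tags { "RenderType" = "Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _Amplitude;
            float _Speed;

            struct appdata { float4 vertex : POSITION; float4 color : COLOR; };
            struct v2f    { float4 pos : SV_POSITION; float4 color : COLOR; };

            v2f vert (appdata v)
            {
                v2f o;
                float3 p = v.vertex.xyz;
                // The step-6 edit: offset each point along a sine wave driven by
                // time and its own position, so the cloud drifts and shimmers
                // instead of sitting still.
                p.y += sin(_Time.y * _Speed + p.x * 4.0) * _Amplitude;
                o.pos = UnityObjectToClipPos(float4(p, 1.0));
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return i.color;
            }
            ENDCG
        }
    }
}

Changing the axis, frequency, or amplitude of the offset, or driving color and opacity the same way, produces the different drifting, flickering looks described in the Overview.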

Animate Points Code

import json
import re
import pandas as pd

# Load the JSON file
file_path = "/Users/thipopp21/Desktop/run_results.json"
with open(file_path, "r") as f:
    data = json.load(f)

# Extract the list from the JSON file
image_list = data.get("selection3", [])  # Ensure it retrieves the correct list

# Function to extract the year from the title
def extract_year(text):
    match = re.search(r"\b(18\d{2}|19\d{2}|20\d{2})\b", text)  # Match years from 1800–2099
    return int(match.group(0)) if match else None  # Return year as an integer or None if not found

# Process the list to extract years
sorted_images = [
    {
        "image": item["image"],
        "title": item["selection4"],
        "year": extract_year(item["selection4"])
    }
    for item in image_list
]

# Remove entries where no year was found
sorted_images = [img for img in sorted_images if img["year"] is not None]

# Sort the list by year (oldest to newest)
sorted_images.sort(key=lambda x: x["year"])

# Save the sorted list to a new JSON file
sorted_file_path = "/Users/thipopp21/Desktop/sorted_images.json"
with open(sorted_file_path, "w") as f:
    json.dump(sorted_images, f, indent=4)

# Convert to DataFrame and save as CSV
df = pd.DataFrame(sorted_images)
df.to_csv("sorted_images.csv", index=False)

print(f"Sorted images saved to {sorted_file_path}")
 

AI visualization process

Reflection

This project explored how generative AI tools can be used together to build immersive, animated environments. I began by using Midjourney to generate conceptual imagery based on textual prompts, then brought those scenes to life in three dimensions using Luma3D’s spatial reconstruction capabilities. RunwayML helped refine the motion and atmosphere of the final animation, creating a fluid and cinematic sequence. Each tool contributed a layer of interpretation, and together they allowed me to move fluidly between imagination and execution in a way that traditional pipelines wouldn’t have allowed.

 

Working with generative AI felt less like automation and more like co-creation. These technologies didn’t take over the creative process — they expanded it. Rather than replacing the artist’s role, they opened up space for experimentation, iteration, and new visual possibilities. I found that using AI in art can be a powerful way to challenge conventional workflows and invite unexpected results. While there are valid concerns surrounding authorship and originality, I believe that when used thoughtfully, generative tools can support and even strengthen an artist’s voice.
