
01/01/2023
Sunday
Overview
For my submission, I’m sharing Krungthep 2.0, a short AI-assisted animation that’s part of an ongoing series exploring how machine vision imagines the future of cities. This piece focuses on Bangkok, my home city, and uses AI as both a tool and a lens to reconstruct speculative urban landscapes from digital memory. I wanted to explore what happens when a city is reinterpreted by algorithms trained on public data, media saturation, and fragmented visual culture. What does the future look like when it’s not shaped by people, but predicted by machines?

I used Midjourney to generate imagined cityscapes of Bangkok, and from there I set a personal creative rule: rely only on AI-driven processes to build the visuals. I took those static images and used RunwayML’s camera control feature to give them depth and movement. RunwayML wasn’t the only AI tool I used, but it played a pivotal role in speeding up the animation process. I could have spent hours modeling scenes in Blender or animating frame by frame, but I chose a method that felt more efficient and aligned with the concept. The goal was not realism but immersion; I wanted to create the feeling of stepping into an AI’s imagined world.

Once I had those motion-augmented clips, I used Luma3D to generate Gaussian Splat reconstructions. I chose Luma3D because of what I learned from an earlier project, Latent Space. During that experiment, I noticed that Luma3D often produced fragmented, unstable environments that felt like half-formed memories. Instead of avoiding that limitation, I leaned into it. The glitchy geometry and incomplete renderings felt symbolic of how AI perceives space, and they became a visual metaphor for how speculative and disconnected AI-driven futures can be.

After generating the splats, I manipulated them directly, editing the motion, opacity, and color of the individual points through custom code. I later realized I could have used Unreal Engine, which supports Gaussian Splatting more smoothly, but I chose Unity because I wanted to challenge myself and better understand how splat files are structured and rendered. I designed a dark, empty scene with deep red tones to represent a digital abyss. For me, that space captures how I imagine AI vision: unanchored, speculative, and shaped by fragmented data instead of lived experience.

For the sound, I composed the audio using a mix of sound presets and procedural sound generators. I wanted the sound to echo the eerie, synthetic tone of the visuals and create a sense of quiet tension throughout the environment. The result is an ambient experience that is more about atmosphere than narrative, and more about suggestion than resolution.

Final Workflow
Introduction | Dreaming New Worlds with AI
Now that we’ve got VR goggles, generative AI that can transform videos in real time, and even tools like AI-assisted Gaussian splatting, you have to wonder: are we actually getting closer to building our own Ready Player One?
It was this idea that first captured my curiosity. The idea of stepping into fully AI-generated worlds where everything from the landscape to the story could shift in real time was just too wild not to chase. What if the future of virtual reality isn’t just about better graphics, but about entire environments dreamed up by machines? That curiosity kicked off this project, a deep dive into how today’s AI tools are already shaping the way we see, build, and maybe even live in virtual worlds.
Concept
From there I set out to build a project that harnesses generative AI tools to construct a virtual world, where I serve as the curator, guiding the vision, while the AI assumes the role of the creator, shaping form, texture, and motion. I was equally drawn to investigating efficient pipelines that could refine and accelerate the animation process, reducing friction between ideation and execution.
Below is a step-by-step overview of the process I undertook.

Literally AI Art. Created with Midjourney
Plan
Taking Golan Levin’s Generative AI class, I explored a range of cutting-edge tools, starting with Midjourney for image generation, then RunwayML for video, and eventually ComfyUI, an open-source workflow tool that gave me full creative control. Through this, I learned how LoRAs and GANs shape visual outputs. I also explored Luma3D, which blends real-world footage with digital environments. For my final project, I plan to leverage all of these tools, along with my new skills in Unity, to build an immersive, AI-generated world. Below is a timeline of what I plan to do.
Tools I Used



Midjourney workflow (on Discord)
ComfyUI workflow (by far the most complicated)
RunwayML workflow
Planned Methodology
- Scene Generation: Use Midjourney to generate images of the scenes I want to render.
- Image-to-Video Conversion: Upload a Midjourney image into RunwayML to convert it into a video. This will allow Luma3D to analyze the scene better.
- Video Stylization: I figured it would be interesting to use multiple layers of AI hallucination by repeatedly feeding the output of one tool into another.
- Video-to-Gaussian-Splat Conversion: Use Luma3D, which has the easiest workflow: upload the video into the application and choose to convert it to a Gaussian splat.
- Splat Edit: Edit the Gaussian splat with SuperSplat. Splats from Luma3D will have a spherical lighting environment around them.
- Download a Gaussian Splat Workflow: Neither Unity nor Blender has a built-in Gaussian splat viewport or renderer, so it's important to download one.
- Choose a Splat Renderer Mode: Choose between point cloud and Gaussian splat render modes. Point cloud shows the main points that the NeRF analyzed from the space.
- Download a Character Camera: To move the character around the scene and explore, use a kinematic camera controller.
- Combine Splat Scenes: It would be interesting to combine all the exported splats in one scene and make your own AI environment.
- Done: Have fun! Explore the interactive scene and move around the Gaussian splat.
Mood board



Works by J. Henny, 3D animator and concept artist



Works by Sujin Kim, 3D artist

Death Stranding

Cyberpunk 2077

No Man's Sky
Inspo.
I’m drawn to envisioning spaces that once were, or those that might one day be. There’s something compelling about fragmented, unfinished environments; their incompleteness suggests memory, loss, or something not yet fully realized. That’s why I use Luma3D to create point cloud splats, embracing their raw, glitchy surfaces. When paired with reference imagery generated by AI, these spaces begin to feel detached and dystopic, echoing how machines might perceive and reconstruct our world, not with emotion, but with cold approximation.
Process & Interactive Display
Unity Rendering & Editing

I animated the debug points and splats to create a dynamic and fragmented aesthetic that mirrors the disjointed nature of machine perception. The shifting motion adds a sense of instability and life, transforming what would otherwise be static data into an active visual experience that reflects the unpredictability and abstraction of digital interpretation.
Steps to Animate Points:
- Navigate to the project hierarchy.
- Click on the Gaussian splat object file.
- Open the drop-down menu called Resources in the Inspector.
- Navigate to the debug points renderer.
- Locate the vert function in the debug points code.
- Change it depending on how you want to animate your points.
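To make that last step concrete, here is a minimal sketch of the kind of time-based offset you can write into a point shader's vert function. This is not the shader that ships with the splat renderer package I downloaded; the struct layout and property names here are illustrative assumptions, but the edit is the same idea: displace each point a little before projecting it, so the cloud drifts instead of sitting still.

```hlsl
// Hypothetical stand-in for a debug-points shader; names are illustrative,
// not the actual files from the Gaussian splat package.
Shader "Custom/AnimatedDebugPointsSketch"
{
    Properties
    {
        _WobbleAmount ("Wobble Amount", Float) = 0.05
        _WobbleSpeed  ("Wobble Speed",  Float) = 2.0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float _WobbleAmount;
            float _WobbleSpeed;

            struct appdata
            {
                float4 vertex : POSITION;
                float4 color  : COLOR;
            };

            struct v2f
            {
                float4 pos   : SV_POSITION;
                float4 color : COLOR;
            };

            v2f vert (appdata v)
            {
                v2f o;

                // Work in world space so the offset is independent of the object's transform.
                float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;

                // Give each point its own phase based on where it sits, so neighbouring
                // points drift slightly out of sync instead of moving as one rigid block.
                float phase = dot(worldPos, float3(1.3, 2.1, 1.7));

                // _Time.y is Unity's built-in elapsed time in seconds.
                worldPos.y += _WobbleAmount * sin(_Time.y * _WobbleSpeed + phase);

                o.pos = mul(UNITY_MATRIX_VP, float4(worldPos, 1.0));
                o.color = v.color;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return i.color;
            }
            ENDCG
        }
    }
}
```

Deriving the phase from each point's own position is what keeps the motion feeling unstable and half-formed, rather than the whole cloud sliding around as one rigid object.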
Steps to Animate Splats:
- Navigate to the project hierarchy.
- Click on the Gaussian splat object file.
- Open the drop-down menu called Resources in the Inspector.
- Navigate to the shader point renderer.
- Locate the vert function in the .shader code.
- Change it depending on how you want to animate your splats.
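The splat shader in the downloaded package is more involved than a plain point shader (it builds screen-space quads from per-splat data), so rather than reproduce it here, this sketch applies the same kind of vert edit to a simple transparent vertex-color shader: the color is tinted and the opacity pulses over time. Again, the structure and names are illustrative assumptions rather than the package's actual code; the point is only to show where a color and opacity animation slots into a vert function.

```hlsl
// Hypothetical transparent shader used to illustrate animating color and
// opacity inside vert; not the real splat renderer shader.
Shader "Custom/PulsingSplatColorSketch"
{
    Properties
    {
        _Tint       ("Tint",        Color) = (1, 0.1, 0.1, 1)
        _PulseSpeed ("Pulse Speed", Float) = 1.5
        _MinAlpha   ("Min Alpha",   Range(0, 1)) = 0.2
    }
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 _Tint;
            float  _PulseSpeed;
            float  _MinAlpha;

            struct appdata
            {
                float4 vertex : POSITION;
                float4 color  : COLOR;
            };

            struct v2f
            {
                float4 pos   : SV_POSITION;
                float4 color : COLOR;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);

                // Fade alpha up and down over time so the whole splat slowly "breathes",
                // and pull the vertex color toward a deep red tint at the peak of the pulse.
                float pulse = 0.5 + 0.5 * sin(_Time.y * _PulseSpeed);
                float alpha = lerp(_MinAlpha, 1.0, pulse);

                o.color = float4(lerp(v.color.rgb, _Tint.rgb, pulse * 0.5),
                                 v.color.a * alpha);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                return i.color;
            }
            ENDCG
        }
    }
}
```

The same sin-and-lerp pattern can be dropped into whatever color value the real vert function outputs, which is one way the deep red, pulsing look of the scene can be driven entirely from the shader.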
Reflection
This project explored how generative AI tools can be used together to build immersive, animated environments. I began by using Midjourney to generate conceptual imagery based on textual prompts, then brought those scenes to life in three dimensions using Luma3D’s spatial reconstruction capabilities. RunwayML helped refine the motion and atmosphere of the final animation, creating a fluid and cinematic sequence. Each tool contributed a layer of interpretation, and together they allowed me to move fluidly between imagination and execution in a way that traditional pipelines wouldn’t have allowed.
Working with generative AI felt less like automation and more like co-creation. These technologies didn’t take over the creative process — they expanded it. Rather than replacing the artist’s role, they opened up space for experimentation, iteration, and new visual possibilities. I found that using AI in art can be a powerful way to challenge conventional workflows and invite unexpected results. While there are valid concerns surrounding authorship and originality, I believe that when used thoughtfully, generative tools can support and even strengthen an artist’s voice.