AI FILMMAKING

This new body of work explores what happens when filmmaking meets AI generation.

Instead of directing a live-action shoot, I write a brief and the corresponding prompts needed to generate the people, performances, and worlds, then take that raw material into post: editorial rhythm, sound design, grading, typography, and mix. The result sits between spec commercial and art film — polished, cinematic, and uncanny.

Each piece is a mirror of place, tone, and character, unfolding entirely through AI direction, art direction, and dialogue.

These are experiments in authorship without a crew — an evolving practice in which creative intent, machine interpretation, and human curation blur into one continuous process.

latent space

Diffractals: Visualizing the Fourier Transform through Synthetic Media

In the physics of wave optics, the Fraunhofer diffraction pattern formed by an aperture is the physical manifestation of its Fourier transform. When light propagates through a fractal geometry—a structure exhibiting self-similarity at every scale—the resulting scattering pattern retains that recursive complexity. The light does not simply bend; it iterates.
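The relationship described above — the far-field diffraction pattern as the Fourier transform of the aperture — can be sketched numerically. This is a minimal illustration under stated assumptions (a Sierpinski-carpet mask standing in for the fractal geometry), not part of the film's actual pipeline:

```python
import numpy as np

def sierpinski_carpet(order: int) -> np.ndarray:
    """Binary aperture mask of a Sierpinski carpet (1 = open, 0 = opaque)."""
    mask = np.ones((1, 1))
    for _ in range(order):
        hole = np.zeros_like(mask)
        mask = np.block([
            [mask, mask, mask],
            [mask, hole, mask],
            [mask, mask, mask],
        ])
    return mask

def fraunhofer_intensity(aperture: np.ndarray) -> np.ndarray:
    """Far-field (Fraunhofer) pattern: squared magnitude of the aperture's 2-D FFT."""
    field = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(field) ** 2

aperture = sierpinski_carpet(4)            # 81 x 81 fractal mask
pattern = fraunhofer_intensity(aperture)   # the "diffractal": self-similar fringes
```

Because the mask is self-similar at every scale, the resulting intensity pattern inherits that recursion — the light iterates, as described above.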

The film visualizes this "diffractal" phenomenon through a hybrid workflow of synthetic media and traditional compositing. Utilizing AI-generated footage and avatar synthesis as raw texture, the final composition was layered and sequenced in Adobe After Effects to mimic the behavior of coherent light passing through complex masks.

The audio production mirrors these optical principles. The soundscape, composed and mixed in Logic Pro, treats the human voice as a waveform subject to interference. Two distinct vocal tracks are panned and temporally offset, creating an auditory interference pattern. This technique forces the listener to resolve the signal from the noise, creating peaks of clarity (constructive interference) and valleys of dissonance (destructive interference) akin to the bright and dark fringes of a diffraction pattern.

Tools & Techniques:

  • Visuals: Generative AI Video, Avatar Synthesis, Adobe After Effects

  • Audio: Logic Pro (Composition & Mixing), Dual-Channel Panning, Temporal Offset

  • Concept: Fourier Optics, Fractal Geometry, Recursive Signal Processing
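The auditory interference technique described above — two copies of a vocal track, panned and temporally offset — can be sketched with a test tone. A minimal numerical illustration (the sample rate and tone frequency are arbitrary choices, not the film's actual settings):

```python
import numpy as np

SR = 44100                       # sample rate (Hz)
FREQ = 441.0                     # test tone standing in for a vocal track
t = np.arange(SR) / SR           # one second of audio
voice = np.sin(2 * np.pi * FREQ * t)

def mix_with_offset(signal: np.ndarray, offset_samples: int) -> np.ndarray:
    """Sum a track with a temporally offset copy of itself (comb-filter interference)."""
    delayed = np.concatenate([np.zeros(offset_samples), signal])[: len(signal)]
    return signal + delayed

period = int(SR / FREQ)                            # 100 samples at 441 Hz
constructive = mix_with_offset(voice, period)      # whole period: in phase, amplitude doubles
destructive = mix_with_offset(voice, period // 2)  # half period: out of phase, cancels to silence
```

Offsets near a whole period reinforce the signal (bright fringes); offsets near a half period cancel it (dark fringes) — the listener resolves clarity out of that interference.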

The Algorithmic Bone / Stochastic Lattice

This piece is a quintessential example of Generative Design and Biomimicry, representing a synthesis of algorithmic complexity and structural logic.

"This is not just a table; it is a frozen algorithm. It replaces the traditional architectural concept of 'columns and beams' with a biological concept of 'cellular aggregation,' blurring the line between the grown and the made."

LEVIS - UNDERWATER

A visual study in suspension — body, breath, and denim.

Generated entirely in Midjourney; composed, scored, and edited by me.

UNDERWATER reimagines Levi’s as something elemental — not worn, but inhabited.
Figures drift and twist beneath a single beam of light, dancing as if memory itself were trying to surface.
Their movements feel both human and spectral — a slow motion prayer between collapse and release.

The result sits somewhere between spec commercial and art film — a moody, modern elegy where fashion becomes metaphor, and AI becomes the lens of empathy.

CARHARTT - DON’T JUDGE

A three-part spec campaign exploring identity, perception, and the stories we assume.

Each film was generated entirely in Sora 2 — character, dialogue, direction, performance — then finished through my post-production process: edit, grade, sound design, and title system.

The series pairs rugged archetypes with unexpected intellect:
men on a city bench debating charcuterie,
two porch-bound neighbors dissecting quantum computing,
a lone mechanic under a car reflecting on Jung’s anima.

It’s not parody — it’s empathy by inversion.
The work asks what happens when blue-collar exteriors hold philosophical interiors — when we stop judging by surface and start listening.

Each film ends with the same truth:
Don’t Judge.

CHARCUTERIE

QUANTUM

JUNG

BRIAN CAIAZZA HYPE REEL

Either I’ve known these people, or I am them.
Creative empaths absorb the world and give it back…

This piece is an exercise in direction, not technology.

At its core, the work explores archetype, voice, and juxtaposition—tools I’ve used my entire career. The characters you see are not random. Each one represents a lived human type: people I’ve known, observed, worked alongside, or absorbed through culture, music, and environment. In many ways, each character is a fragment of myself, refracted through different social lenses.

That understanding of people—how they speak, move, posture, and carry meaning—is the foundation of directing. Casting is never just about appearance; it’s about psychological truth. The unexpected pairing of voice and message is intentional. It creates tension, reveals bias, and exposes assumptions the viewer didn’t realize they were bringing with them.

The role of AI here is instrumental, not conceptual.

I used generative tools the same way I would use a camera, a casting call, or a sketchbook: to externalize ideas quickly and test directions in real time. The technology collapses distance between instinct and execution, allowing creative decisions to surface faster—but it does not author the work. Authorship remains human.

What excites me about this moment is not novelty, but throughput. I can now sit with a problem and manifest multiple credible futures immediately—pressure-testing tone, character, and narrative before committing. This enables sharper judgment, not looser thinking.

This piece reflects how I direct creative in real life:

  • start with people

  • ground ideas in lived observation

  • use contrast to unlock meaning

  • let tools disappear into process

The result is work that feels immediate, human, and slightly disarming—because it’s meant to be.

This is not an “AI project.”
It’s a directing-people project, executed with contemporary tools.

That distinction matters.

12.13.25

SHUR Creative Partners

A small series of experimental idents created with Limor Shur to extend and reinforce his negative-space social campaign. Each piece starts with B&W AI-generated imagery, adds a touch of typographic motion graphics, and is finished with my original score.

Jezus of Nazeré

1. There is a surf spot in Portugal called Nazaré

2. I like to play with words

3. I generated a few images in Midjourney and Nano and sat on the concept for a few weeks: the monk overlooking the huge tidal wave. Deeply existential, as if staring death in the face. Life is bigger! No fear; acceptance.

4. As with everything I make, more time passed and I found this song / soundtrack: C1 · Zeitkratzer · Carsten Nicolai.
It's an incredible slow draw of a bow across what sounds like a cello. It's epic, and all about restraint and patience. Slow. So antithetical to our modern world.

5. I had it in AE and remembered the Jezus of Nazeré renders, so I grabbed the nicest one and time-stretched it down almost 1000%. It's amazing how the frame blending actually held up. The wave cresting is so slow most people probably just thought it was a still. You have to stop and watch to realize that there is motion, and I love that. It's like watching a million pictures change slowly before your eyes.

It would be amazing to experience several pieces like this in a gallery, on huge OLED screens.

The White Room Edit

The first piece I generated in Sora 2. It’s about life, frustration, and being put in a box.

All video and audio were generated in Sora 2 and edited in Premiere.