This case starts with something most artists don’t like admitting: loss.

A “Lost Sketch” from c. 2009
[AI Upscaled]
Image 1 is a character that survives only as a low-resolution digital scan. The original drawing is gone; I probably threw it out around the time it was scanned, when it was already a decade old. That wasn’t unusual. I lost a lot of sketches over the years. Paper degrades, folders disappear, life moves on. What matters is that the scan is now the only remaining anchor tying that character to reality. Without it, the character simply wouldn’t exist anymore.

AI Color & Detail Test
[No Cleanup – No Post]
Image 2 jumps forward to the present and flips the usual AI narrative on its head. This wasn’t about “letting the machine finish the art.” It was a stress test: how low-res could I go and still recover something recognizably mine? The answer was uncomfortably low. Flat base color, Flash-era shading logic, and eyes that sit in that slightly uncanny middle space my work has always had—probably a side effect of years inside Flash rather than any intentional stylistic choice. “Liminal” isn’t quite the right word, but close enough to get the point across. The takeaway here is that style isn’t stored in pixel density; it’s encoded in decisions, habits, and visual shortcuts learned over decades.
Video 1 pushes that idea into motion. This is the first test of taking an augmented sketch and making it move, using unreleased animation tests from years ago, in-game motion data, and custom renders done at 48 and 60 fps specifically because 24 fps just doesn’t hold up anymore. There’s nothing sacred about 24 fps—it was a technical compromise that got mythologized. Once you remove that assumption, everything looks cleaner, more responsive, and more in line with how people actually perceive motion today. This isn’t AI replacing animation skill; it’s animation knowledge being reused aggressively.
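As a back-of-the-envelope illustration (my numbers, not taken from the renders above): much of why higher frame rates read as smoother is that the per-frame jump an object makes on screen shrinks as the rate rises. A minimal sketch:

```python
def per_frame_step(px_per_second: float, fps: int) -> float:
    """Pixels an on-screen object moves between consecutive frames
    when traveling at a constant speed."""
    return px_per_second / fps

# An object crossing a 1920-px-wide frame in one second:
for fps in (24, 48, 60):
    print(f"{fps} fps -> {per_frame_step(1920, fps):.0f} px per frame")
# 24 fps -> 80 px per frame
# 48 fps -> 40 px per frame
# 60 fps -> 32 px per frame
```

At 24 fps the same motion covers 80 pixels between frames; at 60 fps it covers 32, so each frame sits much closer to where the eye expects the object to be.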
Video 2 is where people usually start getting loud. A fully AI-constructed walking model, derived entirely from the same sketch-to-AI-to-video pipeline you just watched. No external characters, no stylistic borrowing, no secret sauce lifted from someone else’s IP.

AI Animation Test of Color & Detail Render

New Character Data
[Over 4,000 Frames from Previous Footage as Source]
So the obvious question gets asked: who is this stealing from? And the honest answer is nobody. This is a closed loop. The source is my own work, my own animation data, my own aesthetic constraints, fed back into itself through a machine that accelerates reconstruction rather than invention.
That’s the part that gets missed when people throw around words like “slop.” What’s actually happening here is time compression. This is about clawing back years that were burned dealing with unreliable collaborators, endless cleanup passes, and human bottlenecks that had nothing to do with creativity and everything to do with logistics, ego, or self-destruction. For someone who’s spent decades learning animation the hard way, being told this is lazy or unethical doesn’t land as critique—it lands as ignorance. The machine isn’t replacing craft here. It’s finally respecting it.
