This is incomplete; I’m just trying to get some version of it out there.
After years of having it in the backlog, I finally watched the livestream of this Academy event about the parallels between the filmmaking of the original Star Wars and Rogue One. www.oscars.org/events/ga…
Along with showing some of the tricks used in the original film, and the ways those translated directly into Rogue One (a neat one being that Rogue One did end up creating virtual model kits that could explode “realistically,” as if they’d been kitbashed), the coolest thing by far was seeing the virtual camera rig created for Gareth Edwards to block out CG shots cohesively.
It was all assembled from off-the-shelf 2016 parts (an iPad, a game controller, and an HTC Vive taped to the back!). That, combined with an actual understanding of filmmaking, made the film shine in ways that CG scenes hadn’t fully nailed before. And maybe haven’t since? Fully CG scenes still feel too abstract.
So, it’s been 9 years since Rogue One; even longer counting the production schedule. And from what I’ve seen, tech just…hasn’t meaningfully advanced in terms of enabling that kind of artistic expression. Not to mention, there are actual physical limits to what we can perceive. At some point, the difference between 4K and 5K becomes…moot? At the very least, it’s not worth the effort IMO. Arguably, things have gotten worse because we stopped looping back for refinements.
You can see versions of this in the limitations of The Volume, an iteration of ILM’s StageCraft that’s been heavily used for a bunch of Star Wars projects. It’s an admittedly cool tool for hybrid sets, but it’s jarring once you notice that most Star Wars shows shot in The Volume are blocked as if they’re stage plays.
The point being: The Volume is a specific tool, designed to solve a specific set of problems. Just like how the duct-taped virtual camera allowed for quick, cheap blocking of fully CG shots. And how The Last Jedi’s use of physical models composited into CG shots grounded those scenes.
A prevailing mantra of tech for the past almost-decade is the idea of an omnitool: a small subset of ~tools~ services that act as a panacea. But that’s not how collective endeavors work! The difficulty, the challenge, and the joy lie in finding the right mix of existing tools & some new techniques to push a project to completion. Then going back and figuring out/teaching that new technique!
No matter what AI Chuds try to tell us, there’s always value in doing the work itself. There’s still so much ground to cover with our existing tools & approaches. Working faster/cheaper/more vibes-y isn’t going to make something better, especially with how rickety our foundations currently are.