A primitive HFR test, and cubes. Of course.
Posted 30 December, 2012
It's shiny, it's new (in a buzzword context, not in a 'let's shoot at high frame rates' context). It's also something that's not that hard to play with at home once you get curious... and well, HFR is kinda neat. I've a soft spot for higher data rates.
Having just seen The Hobbit this weekend (note: I'm not lazy, it only got released in Australia on the 26th..), and finally seeing what the deal was with this HFR thing, I almost fell into that trap of speculating and discussing the relative pros and cons based on the movie... and then I caught hold of myself - I'm an engineer, dammit. And I have a computer. Why the heck don't I just create a fairly artificial, but potentially instructive, test?
It's not like I can't render something out at varying sample rates, and even better, I can avoid being constrained by little things like say, cameras, and working out how to reproduce the real world.
For the purposes of this example, I'm defining HFR as 48fps against a baseline of 24fps. It'll do for this test, but note that I'm being very loose with my terminology. Also note that this is highly artificial, and that in the Real World (especially if I had to match a 48fps plate..) there'd be a lot more to worry about and little artifacts to manage.. but hey, even stuff like this is instructive.
Anyhoo, after all of that, the TL;DR: I can see why HFR works.. but I can't quite tell why I felt like it didn't work so well in the cinema (that's another question though). As this is all just experimentation, there aren't really any conclusions or massive takeaways; just that if you're at all curious, you might as well play with it yourself. It's fun!
Setting up an experiment
Since my major curiosity is how HFR impacts motion, it's time to invoke the rule of the falling cube. 27 of them, in fact. Some basic rotations are applied, a bit of a heave up, a radial push out, and then gravity and the magic of RBD are left to create the motion. There's a mix of slow and fast movement here, both linear and rotational. Of course it's not a perfect, hypertechnical HFR test, but as I've learnt - quick is good.
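If you want the gist of the setup without opening the scene file, here's a rough sketch (plain Python, not the actual Maya scene - the grid layout and the HEAVE/RADIAL speeds are just illustrative assumptions):

```python
import math

# Hypothetical sketch of the initial conditions: 27 cubes in a 3x3x3
# grid, each given an upward heave plus a push radially away from the
# centre column, before gravity and the rigid-body solver take over.
HEAVE = 4.0   # assumed upward speed, units/sec
RADIAL = 2.0  # assumed outward speed, units/sec

cubes = []
for i in range(27):
    # unpack a flat index into a 3x3x3 grid centred on the origin
    x, y, z = i % 3 - 1, (i // 3) % 3 - 1, i // 9 - 1
    r = math.hypot(x, z) or 1.0  # avoid dividing by zero at the centre
    velocity = (RADIAL * x / r, HEAVE, RADIAL * z / r)
    cubes.append({"position": (x, y, z), "velocity": velocity})
```

Feed each cube a different starting rotation on top of that and the solver does the rest.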
To approximate reality (which runs somewhat higher than 48fps..), the dynamics were simulated in the scene at the higher frame rate, then the keys baked out and the framerate dropped back to 24fps, with Maya allowed to interpolate. That should give me the same motion regardless of how many frames I'm using (a curve is a curve is a curve).
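The "a curve is a curve" point is easy to demonstrate outside Maya. A toy sketch (the projectile curve is made up, but the sampling logic is the point): once the motion lives on a curve, the 24fps render is literally every second sample of the 48fps one.

```python
# Sample a time -> value motion curve at two frame rates and show the
# 24 fps samples are exactly every other 48 fps sample.

def sample_curve(curve, fps, duration):
    """Evaluate a time->value function at each frame for `duration` secs."""
    n = int(duration * fps)
    return [curve(i / fps) for i in range(n + 1)]

# Toy "dynamics": projectile height under gravity (illustrative only).
curve = lambda t: 5.0 * t - 0.5 * 9.81 * t * t

hfr = sample_curve(curve, 48, 1.0)   # 49 samples
base = sample_curve(curve, 24, 1.0)  # 25 samples

# Dropping frames changes the sampling, not the motion:
assert base == hfr[::2]
```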
From that, I've rendered out two completely independent sequences and the associated motion vector pass. Materials are basic MIAs so there's a bit of shininess (in case that was going to show off anything else interesting), and there's some cheap AO/FG in there just for prettification.
Handball to Nuke, where the only effects I'm applying are the vectorblur node and packaging everything up into H.264-encoded QuickTimes. Eaaasy.
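(I let Nuke handle the encode, but if you're following along at home without it, an equivalent command-line encode with ffmpeg looks something like this - the filenames, CRF and pixel format here are illustrative, not what my Write nodes used:)

```shell
# 48fps PNG sequence -> H.264 QuickTime (settings are assumptions)
ffmpeg -framerate 48 -i hfr.%04d.png \
       -c:v libx264 -pix_fmt yuv420p -crf 18 HFR2-moblur128.mov

# same again for the 24fps baseline
ffmpeg -framerate 24 -i baseline.%04d.png \
       -c:v libx264 -pix_fmt yuv420p -crf 18 baseline-moblur128.mov
```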
The baseline
To set the scene, here's a literal side by side with HFR-A* (i.e. dropping every other HFR frame). I've hue-shifted one of them^ so you can clearly see the dividing line and see how closely the two of them match up. You can see a little bit of the difference in position between the two when the cube catches the light (see frame 47) but overall the nature of the motion is consistent. Doing the same with HFR-B is a bit more distracting since it's further out of sync (it's sampled half a frame later) but the overall characteristic of the motion remains unchanged.
Some raw quicktimes for you to play with
Since this is primarily an anim test, I've prepped four files, two with motion blur and two without (there's a bonus 'super-extra-blurred' file too). In both renders I'm using the same normalised motion vector pass with a max pixel displacement of 256, so theoretically I should be able to use the same vectorblur settings. Inspecting the motion vector pass pretty much bears this out: for the same region on Baseline the normalised X/Y displacement is, say, 0.02284/-0.0264 (about 30 pixels), while for HFR-A and HFR-B it's roughly half that. Makes sense too, since for each HFR frame the relevant pixel has 1/48th of a second to move instead of 1/24th. So, without prejudicing your thoughts, here's some raw data:
- the Maya 2012 scene files: HFRtest-scenefiles.zip (1,428 kb)
- a baseline file with no blur: baseline-noblur.mov (4,024 kb)
- a baseline file with motion blur: baseline-moblur128.mov (4,058 kb)
- the HFR equivalent with no blur: HFR2-moblur.mov (8,111 kb)
- the HFR equivalent with the same amount of blur: HFR2-moblur128.mov (8,076 kb)
- the HFR equivalent with twice the blur: HFR2-moblur256.mov (8,044 kb)
- and if you really want, and you asked nicely enough, I guess I could upload the EXRs somewhere.
You may want to use a flipbook to make sure you're getting the appropriate framerate btw.
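The "roughly half" observation on the motion vectors is just arithmetic, and worth spelling out since it's why the same vectorblur settings work on both renders. A back-of-envelope check (the cube speed here is an invented number, not measured from my renders):

```python
# Why the same normalised vectorblur settings work for both frame rates.
MAX_DISPLACEMENT = 256.0  # max pixel displacement baked into the pass

speed = 480.0  # assumed cube speed in pixels/second (illustrative)

disp_24 = speed / 24.0  # pixels travelled per baseline frame
disp_48 = speed / 48.0  # pixels travelled per HFR frame

# Each HFR frame covers half the time, so half the displacement...
assert disp_48 == disp_24 / 2.0

# ...and the normalised motion vector value halves along with it:
norm_24 = disp_24 / MAX_DISPLACEMENT
norm_48 = disp_48 / MAX_DISPLACEMENT
```

So with an identical vectorblur setup, each HFR frame naturally gets half the blur length - exactly what the renders show.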
Observations & Thoughts
I spent 20 mins eyeballing the normal 24fps files (with and without moblur), and then going to HFR was a bit of a shock: it _does_ look a bit more, errm, well, sharp, for lack of a better term. There's a noticeable additional level of data that I'm processing and assimilating, and I adapt to it pretty quickly.
Going *back* to 24fps after getting used to HFR is a shock too; it immediately feels jittery, as if I can see the substeps in the scene. Motion blur doesn't help any here, obviously. Again though, adaptation doesn't take long before this looks 'fine' again, but I can still sort of see the frame steps now. I can understand why, once you've acclimatised, it might be hard to go back.
I rather like the feel of HFR - but that's only in this test. My largest-scale sample (i.e. The Hobbit) wasn't nearly as positive, so I'm a bit wary here. I still can't shake the feeling that what bothered me there wasn't an artifact of the increased frame rate, but some kind of glitch. I dunno, it felt too much like a sync issue when it felt wrong (the very opening scene with the old Bilbo, for example).
Artificially doubling the motion blur actually works decently, for some odd reason. Must keep that in mind.
As I'm writing this, I've got both the HFR and moblurred baseline clips looping out of the corner of my eye, and it's actually really noticeable which is which, more than if I'm staring at them directly.
It's only in high-movement zones that I'm getting a lot of benefit from HFR, but it's taking a straight x2 on render cycles to generate the frames - more if you count the extra processing/storage/etc. Kind of a 'handle with care' situation: do you really want to double your storage/management/render requirements? Didn't think so...
Upsampling is the devil's work. No, really. I had a stab at upconverting the 24fps baseline to 48, but quickly realised that either it's a simple task (motion blur and hold frames for double the time where there's low motion), or you're going to get absolutely craptastic results for the effort you're putting in. This may not hold as techniques develop (for example: Luke Letellier's work), but it's still pretty clear that trying to create data where none exists is just going to hurt. You're better off taking the render hit, getting the data, and throwing it away when it's not helping you (see the initial HFR-A splitscreened with baseline), IMHO.
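To see why the naive approach falls over, here's the simplest possible 24-to-48 upconvert - blending adjacent frames - reduced to a toy one-row "image" (all invented data, obviously). Blending isn't motion: instead of the object appearing at the halfway position, you get two half-bright ghosts.

```python
# Deliberately naive 24 -> 48 fps upconvert: cross-dissolve between
# neighbouring frames. Fine for static shots, ghosting for anything
# that moves.
def upsample_blend(frames):
    """frames: list of 'images' (here just lists of pixel floats)."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(x + y) / 2.0 for x, y in zip(a, b)])  # blended tween
    out.append(frames[-1])
    return out

# A single bright pixel moving one pixel per frame:
frames = [[1.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 1.0]]
up = upsample_blend(frames)

# The in-between frame is two half-bright ghosts, not a pixel at the
# halfway point:
assert up[1] == [0.5, 0.5, 0.0]
```

Real retiming tools use the motion vectors to *warp* pixels rather than blend them, which is exactly the "data where none exists" problem - the vectors only tell you so much.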
Experimentation rocks. Why the hell would I spend hours on a forum debating this when I could just go out and render out 500 frames to actually see for myself? :)