Entries in set-reconstruction (2)

Thursday
Oct 20 2011

Solving 3D points for tracked cameras

When you're creating geometry for a tracked camera, it's useful to be able to use a calculated point cloud as a basis for the placement of the geometry. If you know what I'm talking about, you've probably run into the problem of needing additional points in 3D space that are not in the point cloud that came with the camera track. While most tracking software has good options for solving additional trackers after you've solved a camera, this is generally not what you want to do... The shot may already be handed over to 3D or compositing, for example, or the features you need the 3D position for may be very difficult to track, occluded for a big part of the shot, or many in number (and therefore time-consuming).

Because I like to do most of my geometry reconstruction in Maya, I use a simple but very powerful trick to work out the 3D position of any point in 3D space, for which you only need a camera track and two frames in which the same feature is clearly seen from different angles.

How does it work?

If you work in Maya, download and install the Python scripts I wrote to simplify the workflow and keep the focus on the tracked camera view during all steps: mdg_matchmoveScripts_v0.1.zip

  1. Choose the feature you need the 3D position for, move the Playhead to a frame where it's clearly visible and set the Viewport focus to the tracked Camera Pane,

Having the Viewport focus set correctly is important for the scripts below to work as expected.

  2. Click the 'autoPlaceLocator' shelf button to create a Locator in the center of the tracked Camera Pane,

You can also use a standard Locator (or Null), but with the 'mdg_autoPlaceLocator' script you never have to look for the Locator when the origin is outside the tracked camera view. The script also takes all incoming transform nodes on the tracked camera node into account.

  3. Select the Move Tool or press w,
  4. MMB drag the Locator to the feature (in the plate) for which you need the 3D position,

Because you are looking through the tracked camera, this determines the X and Y positions of the Locator in camera space, leaving the Z position unknown.

How to determine the X and Y positions of an image feature in camera space

  5. Click the 'movePivotToViewportCamera' shelf button to move the Locator pivot to the position of the tracked camera in 3D space,

You can also do this with the standard Maya functionality by looking for the Locator and tracked camera in another perspective view, but with the 'mdg_movePivotToViewportCamera' script you never have to leave the tracked camera view.

  6. Move the Playhead to a frame where the feature is clearly visible from another angle,
  7. Select the Scale Tool or press r,
  8. MMB drag or use the Channel Box to change the scale values until the Locator matches the feature again (keeping the x, y and z scale values equal),

Basically, steps 4 and 5 determine that the position of the Locator should be somewhere on a line in 3D space. By scaling the Locator with its pivot on the first camera position, you can move it along that line. Looking at the feature (and the line it should be on) from a clearly different angle makes it possible to work out exactly where on that line the Locator should be.
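For those who prefer the math behind the trick: constraining the Locator to the viewing ray of the first frame and then matching it from a second angle is equivalent to finding the closest point between two rays. A minimal NumPy sketch of that idea (all names illustrative, not part of the scripts above):

```python
import numpy as np

def triangulate(cam_a, dir_a, cam_b, dir_b):
    """Return the point on ray A that lies closest to ray B.

    cam_a, cam_b: camera positions on the two frames.
    dir_a, dir_b: viewing directions through the feature on those frames.
    """
    a = np.asarray(dir_a, float)
    b = np.asarray(dir_b, float)
    w = np.asarray(cam_a, float) - np.asarray(cam_b, float)
    # Least-squares conditions for the ray parameters t (ray A) and s (ray B):
    # the connecting segment must be perpendicular to both directions.
    # Note: the system is singular when the rays are parallel -- i.e. no parallax.
    A = np.array([[a @ a, -(a @ b)],
                  [a @ b, -(b @ b)]])
    rhs = np.array([-(w @ a), -(w @ b)])
    t, _ = np.linalg.solve(A, rhs)
    return np.asarray(cam_a, float) + t * a
```

The scale value you dial in by hand in step 8 plays the role of the ray parameter t here.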

  9. Move the Playhead to check the accuracy of the 3D position, and fine-tune the scale values if needed.

How to work out the missing Z dimension of the image feature position by using a second camera angle

Finally, the obvious should be stated: this technique only works on shots with parallax, and its accuracy largely depends on the accuracy of the camera track. Also note that while I explained my workflow in Maya here, none of the steps are really specific to Maya, and they can be applied in any other 3D or compositing software. In Nuke, for example, the "PointsTo3D" node does a similar thing.

 

Friday
Sep 02 2011

Portal: No Escape - Hallway sequence

Earlier this year I worked on a dozen shots for the short film "Portal: No Escape", directed by Dan Trachtenberg, together with a remote team of vfx artists supervised by Jon Chesson. The film, with the talented actress Danielle Rayne in the leading role, premiered at this year's Comic-Con in San Diego and was released online last week, reaching viral proportions of over 5 million views in less than 6 days and being marked as the 'most liked video' of August (recommended viewing in 1080p HD in FULLSCREEN with sound UP):

 

After the initial contact with Dan was made via Twitter, my good friend Marijn Eken and I were asked to do the matchmoving for this project, where I would be responsible for the Hallway sequence and most of the Roof shots. In this blog post I hope to give a little insight into some of the matchmove work that was done on this amazing short film.

The film was shot on a Red One camera with a set of Lomo anamorphic lenses of 35, 50 and 75mm, with most shots done on the 50mm. The tracking plates were handed to us in the form of a (quite dark) 1080p unsqueezed and letterboxed Targa sequence, with no additional info on the original Red and anamorphic modes that were used during the shoot, nor on how the footage was cropped to HD.

Hallway sequence thumbnails and shot numbers

Image-distortion

One of the first things that stood out when I studied the tracking plates, was the very prominent high frequency rolling shutter, especially at the end of the sequence (see: image wobbles in shot 17 and 18).

A logical tool for tackling this image-distortion problem, which is often experienced with CMOS cameras, is the Rolling Shutter plug-in by The Foundry. However, using their suggested settings (a correction value of 0.32 for the Red One camera) introduced serious artefacts in the corrected footage, even after experimenting with the other input parameters of the plug-in:

Example of rolling shutter correction artefacts in shot 18

These artefacts could be 'fixed' by a paint artist later in the vfx pipeline, but they can also mess up a 3D track, because tracking points might be 'jumping' on bad frames and therefore not deliver accurate results. This led us to decide to work with the original tracking plates and see how far these would get us...

The next step in the 3D tracking workflow is removing any lens distortion from the tracking plates. Because no lens distortion grids were available here, I used 'checklines' to unbend lines in the plate that were assumed to be straight in reality. Unexpectedly, this led me to conclude that this unsqueezed anamorphic footage had no significant lens distortion.
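The idea behind a checkline can be sketched in a few lines of NumPy: sample 2D points along a supposedly straight edge in the plate, fit a line through them, and look at the worst deviation. If that stays near zero across the frame, distortion is negligible. A hypothetical helper (not part of any tracking package):

```python
import numpy as np

def max_deviation(points):
    """Maximum perpendicular distance of 2D points from their best-fit line."""
    pts = np.asarray(points, float)
    centered = pts - pts.mean(axis=0)
    # The principal direction from the SVD is the best-fit line through the centroid.
    _, _, vt = np.linalg.svd(centered)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    # Project onto the line's normal to get signed distances; report the worst one.
    return np.abs(centered @ normal).max()
```

A perfectly straight checkline returns (near) zero; a barrel- or pincushion-bent one returns the bend in pixels.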

Camera tracking

To get a good first solve for the camera motion, I decided to ignore the whole rolling shutter problem for a moment, and just tried to track the original tracking plates...

Using SynthEyes this turned out to be pretty straightforward. For most shots it was nothing more than increasing the gamma to make possible features in dark areas more apparent, running the Auto-tracker to track the shot, manually deleting the bad trackers, solving the shot with a known 50mm lens on a 24.4 x 13.725 mm sensor, and finally orienting the ground plane in a similar way for each shot to simplify the Hallway reconstruction later on. The only shot that needed a different approach was shot 13a, which was almost completely out of focus. By carefully placing 11 manual trackers (using SynthEyes' very efficient workflow for supervised tracking), I was able to solve the shot with the same lens and sensor.
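As a side note, a known focal length and sensor width directly fix the horizontal angle of view the solver has to agree with. A quick sketch of that relation, using the values from the shoot above (this ignores any anamorphic squeeze, which is fine here since the plates were delivered unsqueezed):

```python
import math

def horizontal_fov(focal_mm, sensor_width_mm):
    """Horizontal angle of view in degrees for a given focal length and sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# The 50mm lens on the 24.4 mm wide back gives roughly a 27.4 degree horizontal view.
fov = horizontal_fov(50.0, 24.4)
```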

However, even though the camera solve wasn't corrupted by the rolling shutter, the wobble in the plates of course remained... and thus some of the solved 3D positions for the 2D trackers were still sliding in areas where the high frequency rolling shutter appeared. Realizing that I would probably have to face this problem at some point, I decided to build a proxy model for the Hallway first...

Hallway reconstruction

My initial idea was to build one Hallway model that matched the plates in all the shots, so that the scene scale and a position within the Hallway would be consistent from shot to shot, which would be a great benefit for the particle simulations that needed to be done later on. This proved to be quite difficult, partly because the green screen Portal stand-in on the wall changed from shot to shot, and because the walls seemed to slant more in one shot than in another.

I started out building a proxy model for shot 14, which had the best view through the Hallway towards the start. Then I copied this model to shots 13 and 16, which have the same viewing direction, changed the scale, position and orientation of the new scene (where needed) to match the model from shot 14 as closely as possible, and finally fine-tuned the geometry in areas where differences between shots would appear... In this way the proxy model could vary a little from shot to shot, but consistency in scene scale, position and orientation would be maintained.

Proxy Hallway model for shot 14 (with scene origin)

Continuing with the shots looking at the other end of the Hallway, I again used the proxy model from shot 14 to build on, and used the same principles to reconstruct matching geometry for the remaining shots.

3D rolling shutter simulation

Once I had a proxy model for the entire Hallway in place, the idea was born to simulate the effect of the high frequency rolling shutter in 3D to get a better match, by deforming the model slightly from frame to frame. Obviously this was a cheat that might help with compositing, but one that should never be used as input for any particle simulations, because those could be influenced by the vibrating walls and ceiling.

An important thing to realize here is that high frequency rolling shutter can be a very local problem. The ceiling might be vibrating up and down while the floor is not, the left wall might be vibrating left and right while the right wall is not, or the right wall might be skewing close to the camera while in the distance it is not. But because in this sequence the camera was always looking along the length of the Hallway, I suspected this image distortion could be simulated by using a lattice deformer.

After some experiments showed that a 3x3x3 lattice around the complete Hallway model was the ideal setup, I was able to find a perfect match for every frame of the sequence by animating individual lattice points in areas where the high frequency rolling shutter appeared:

3D rolling shutter simulation (using a 3x3x3 lattice) in shot 17 played at half speed

Segment of the Hallway matchmove in shot 17 before and after 3D rolling shutter simulation
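To illustrate how a lattice keeps such corrections local, here is a minimal free-form-deformation sketch in plain Python/NumPy, using per-cell trilinear interpolation over a 3x3x3 lattice. (Maya's lattice node uses smoother basis functions, so this only illustrates the principle, not the exact deformation.)

```python
import numpy as np

def lattice_deform(point, box_min, box_max, offsets):
    """Deform a point inside a box by a 3x3x3 lattice of point offsets.

    offsets: array of shape (3, 3, 3, 3) -- an XYZ offset per lattice point.
    Uses trilinear interpolation within the cell containing the point.
    """
    box_min = np.asarray(box_min, float)
    box_max = np.asarray(box_max, float)
    p = (np.asarray(point, float) - box_min) / (box_max - box_min)
    # Three lattice points per axis means two cells per axis:
    # pick the cell index (0 or 1) and the local fraction inside it.
    idx = np.clip((p * 2).astype(int), 0, 1)
    frac = p * 2 - idx
    out = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = (frac[0] if dx else 1 - frac[0]) * \
                    (frac[1] if dy else 1 - frac[1]) * \
                    (frac[2] if dz else 1 - frac[2])
                out += w * offsets[idx[0] + dx, idx[1] + dy, idx[2] + dz]
    return np.asarray(point, float) + out
```

Animating a single lattice point only moves geometry near that corner of the box, which is exactly why a lattice can mimic a wobble that hits the ceiling but not the floor.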

Portal gun reconstruction

For making believable gun blasts, particle work and contact lighting on the Portal gun, a seamless object track and an accurate reconstruction of the Portal gun were essential.

Before reconstruction I would normally place trackers on the Portal gun and calculate a motion path and point cloud first, but because of the lack of trackable features on the gun and the fact that in a lot of frames the black parts of the gun almost disappeared in the background, this workflow was bound to fail.

So I started with building a proxy model for the Portal gun based on available reference photos:

First rough version of Portal gun model based on available reference photos of a damaged gun

Reference didn't get me all the way there though, because the gun was slightly different from the one in the tracking plates... the barrel was damaged (probably during the shoot), and I found out it was able to slide back and forth, which also caused the cables to bend.

So to be 100% sure my proxy model of the Portal gun would match, I needed to go back to the tracking plates and fine-tune my first rough version:

Fine tuned version of Portal gun model based on 3 viewing angles in shot 14

Basically this meant that I needed to set up the Portal gun model for rotomation (more on this later), and position and orient it on a number of 'key' frames in a shot with clearly different viewing angles on the gun, giving me enough information to fine-tune the proxy model.

Unfortunately though this model couldn't be used for all shots, because it appeared that they used a second 'stunt' version of the gun in shots 17 and 18 where she's fighting off the guard. The shape of this 'stunt' version was so different from the 'original' Portal gun, that I had to build a separate model for it:

Adapted 'stunt' version of Portal gun model based on 5 viewing angles in shot 18

Portal gun rotomation

Normally, having an accurate proxy model of the Portal gun would be an ideal starting point for a geometry track, but just like the regular object track for this sequence, it was bound to fail... In some shots the black parts of the gun were almost indistinguishable from the background, and in others the gun was moving in and out of sight entirely. Altogether this made rotomation the best way to go.

The first thing to do before starting any rotomation is to make sure the pivot point and rotation order are chosen wisely, and that the scaling is worked out properly. Luckily I got some measurements from set, which gave me a good starting point to set this up:

  • width of hallway = 5' = 1.524 m
  • height of actress = 5' 3½'' = 1.613 m
  • length of gun = 27" = 0.686 m
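For reference, the metric values follow directly from the imperial measurements (1 inch = 0.0254 m). A tiny conversion helper, purely to show the arithmetic:

```python
def to_meters(feet, inches=0.0):
    """Convert a feet-and-inches measurement to meters (1 inch = 0.0254 m)."""
    return (feet * 12 + inches) * 0.0254

hallway = to_meters(5)        # 1.524 m
actress = to_meters(5, 3.5)   # 1.6129 m
gun     = to_meters(0, 27)    # 0.6858 m
```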

Using this information, I scaled, positioned and oriented the model of the Portal gun inside the reconstructed Hallway model on a number of 'key' frames in shot 14. Then, once everything seemed to look right in frame, I did the same thing in a couple of other shots, to make sure the scaling I found in shot 14 would place the gun correctly in the Hallway in all the other shots of the sequence as well. To illustrate the importance of this step, check out how much influence a proper scaling has on the positioning of the gun, noting that the rotomation of the gun matches perfectly in frame for both scaling values:

TIP: Motion path shift occurs when the wrong Portal gun scale is chosen before rotomation

Once the scaling for the Portal gun was locked down, the process of rotomation was basically just animating the gun by hand to make it match on every frame, while making sure that the motion path of the gun looked realistic in 3D. For every shot I would start with a correctly positioned and oriented gun on a number of 'key' frames in the shot, and fill in the areas in between from there...

While doing this, it proved to be very important to keep an eye on the animation curves... Working from 'key' frames can sometimes result in 'unnatural' jumps in the animation curves, which might look fine in both 2D and 3D, but can cause strange problems when particle simulations are attached to the gun. The reason is that particle simulations don't just evaluate on whole frames; they also calculate substeps between frames, which can look very different between the two versions of the curves:
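To see why substeps matter, here is a small cubic Hermite sketch (a common interpolation model for animation curves; the tangent values are made up for illustration). Two keys with identical on-frame values can still diverge wildly halfway between frames when the tangents jump:

```python
def hermite(p0, m0, p1, m1, t):
    """Cubic Hermite interpolation between two keys; t in [0, 1]."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

# Two consecutive keys, both with value 0.0 on the frame:
flat  = hermite(0.0, 0.0, 0.0, 0.0, 0.5)    # smooth tangents: 0.0 at the substep
jumpy = hermite(0.0, 6.0, 0.0, -6.0, 0.5)   # steep tangents: overshoots to 1.5
```

On whole frames both curves read 0.0, but a particle simulation sampling at t = 0.5 sees the overshoot on the 'jumpy' curve and reacts to it.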

TIP: Rotation curves with 'unnatural' jumps seem correct but can have a strange effect on particle sims attached to the Portal gun

Scene delivery

One more thing to mention before showing the final result: while FBX worked fine for getting all this tracking data from Maya to Softimage, Nuke had a bit of trouble importing all the animation curves correctly. Luckily I ran into the very handy FromMaya2Nuke Python script, which automatically prepares a Nuke scene including the animated camera and obj-sequences for all moving geometry, making both my life and the compositors' lives a lot easier...

Final result

Of course I can't finish this post without showing the final result... so here it goes, with compliments to Dan Trachtenberg, Danielle Rayne, Jon Chesson, Paul Griswold and others who all did a great job on this sequence (recommended viewing in 1080p HD in FULLSCREEN):

 

Also check out what other team members say about working on this project: