Spherical rendering: Initial tests


I had originally planned on having a fully rendered, stereoscopic turntable ready for this week’s check-in, but I grossly underestimated how long it would take to render one. I left it running overnight, but I’m still on frame 81 out of 300 — technically double that, considering I’m rendering from two cameras at once.

Here is a single frame of what’s currently getting rendered (Left eye):
dreadStereo_compTest

 

I was able to get Mental Ray working, but VRay for Maya still isn’t cooperating with the DomeMaster plug-in. According to the plug-in’s GitHub repository, it should work if I run it through VRay’s Standalone / command-line renderer. I’ll aim to have a test of that done for next week, in addition to getting my Mental Ray renders polished and working.

VRay Standalone DomeMaster Render via https://github.com/zicher3d-org/domemaster-stereo-shader
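I haven’t actually kicked off a standalone render yet, so as a note to myself, here’s a rough sketch of what I expect that step to look like, assuming the Maya scene gets exported to a .vrscene file first. The file names are placeholders and the flags are from memory, so they’ll need to be checked against the standalone renderer’s help output before I rely on this:

```python
import subprocess

# Rough sketch of kicking off a V-Ray Standalone render of the turntable.
# Assumes the Maya scene has already been exported to a .vrscene file and that
# the standalone binary is on PATH; flag names should be verified against the
# installed version's help output.
SCENE = "dreadStereo_turntable.vrscene"   # hypothetical export of the Maya scene

for frame in range(1, 301):               # 300-frame turntable
    subprocess.run([
        "vray",
        f"-sceneFile={SCENE}",
        f"-frames={frame}",
        f"-imgFile=renders/dreadStereo.{frame:04d}.exr",
    ], check=True)
```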

This week’s renders will be viewable on the GearVR by formatting them in a top/bottom configuration and putting the finished .MP4 files in the root/Oculus/360Videos folder of my Galaxy Note 4. The built-in Oculus 360Video viewer plays videos as monoscopic by default, so I need to rename the video files as “namingConventions_TB.mp4” to tell the viewer to split each frame into top and bottom halves and play it back in stereo. The aspect ratio of each eye will be compressed to 4:1 (so, 1920 x 480). When I test VRay next week I’ll try to render in at least 2K, to test the limits of my phone’s playback.

Top/Bottom stereo configuration for latlong videos via http://elevr.com/cg-vr-2/
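As a reference for that formatting step, here’s roughly how I plan to stack the two eye renders into a single top/bottom file with ffmpeg’s vstack filter. The file names are placeholders, and this assumes each eye has already been rendered out as its own flat latlong video:

```python
import subprocess

# Sketch of packing the two eye renders into one top/bottom file for the
# GearVR's 360Videos player. Assumes ffmpeg is installed; file names are
# placeholders.
LEFT, RIGHT = "dreadStereo_L.mp4", "dreadStereo_R.mp4"
OUT = "dreadStereo_TB.mp4"   # "_TB" suffix tells the viewer it's top/bottom stereo

subprocess.run([
    "ffmpeg",
    "-i", LEFT, "-i", RIGHT,
    "-filter_complex", "[0:v][1:v]vstack=inputs=2[v]",  # left eye on top, right below
    "-map", "[v]",
    "-c:v", "libx264", "-pix_fmt", "yuv420p",
    OUT,
], check=True)
```

If the eyes end up swapped in the headset, flipping the order of the two inputs should fix it.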

The creator of the Domemaster plugin also has a ton of really interesting dome/VR production tools on his downloads page. Some are free and some are paid, but they might be worthwhile purchases for the department if Drexel goes forward with creating a VR animation class, like Nick has mentioned.

A few noteworthy examples:

Playblast VR for Maya: Playblast in a dome / stereo VR format. Different from the Domemaster in that it renders NURBS, scene elements, etc., just like a playblast. Immersive animation breakdowns sound neat!

Dome2rect: Advertised as a turnkey way to make rectangular trailers from VR/dome imagery.

Domemaster Photoshop Actions: Solves the problem of quickly converting between Dome and VR formats.

Fusion VR Macros: VR Compositing presets for Blackmagic Fusion. Might be worth looking at for workflow logic when moving to Nuke.

Powerpoint Dome: Even slideshows are getting in on VR!

So, to conclude…

Goals for Next Week:

  • Mental Ray VR Turntable
  • VRay VR Turntable
  • Stitch & stereo convert background plates
  • Research for the following week: compositing stereo-spherical video in Nuke

Stereo-Spherical Rendering Research


Week #2 Goals: 

  • Write “Dreadnoughtus Experience” script (v01)
  • Review monoscopic Montana spheres
  • Research mono-to-stereo conversion
  • Research 360-degree stereo rendering solution
  • PRODUCTION: Build Patagonia scene
  • TESTING: Test stereo renders on GearVR and Cardboard

“Dreadnought” Script Progress

With the goals for this term laid out, it’s time to begin putting serious thought into the narrative structure of my thesis. I’m aiming for a short, 2-4 minute documentary-style experience guided by narration from Dr. Ken Lacovara. I’ve finished the first draft of the script, complete with awful placeholder narration! (I’ve established a monthly meeting time with Dr. Lacovara, who will assist in writing his dialogue.)

WIP Script via www.celtx.com

The narrative of “The Dreadnoughtus Experience” is broken into four locations: The Academy of Natural Sciences’ Dinosaur Hall, a Patagonian desert, a fully-digital virtual scale-comparison environment, and Logan Square (outside the Academy). Each location has its own set of challenges, but all four should rely on a similar pipeline for capturing, rendering, stitching, and compositing. Let’s find out what that entails!


Monoscopic Sphere Conversion

My research partner Emma Fowler recently captured a small library of 360-degree photo spheres during a trip to Central Montana. Because the landscape is similar to Dread’s excavation site in Argentina, we will use these images as the base plates for the Patagonian Desert scene. Emma shot the photo spheres with a Canon 1D Mark 1 camera, a tripod, and a Nodal Ninja 3, an attachment that allows for incremental rotation around the lens’s no-parallax (nodal) point. My adviser Nick J. then ran the photos through Kolor Autopano, resulting in a high-quality, seamless, 360-degree panorama.

Montana Panorama via Emma Fowler


Next, the image must be converted to stereo. For this section I will be following the fxguide “Art of Stereo Conversion” article as reference. The article is from 2012 and doesn’t cover a spherical workflow, but it breaks down every step of the VFX pipeline for stereoscopic imagery and will be a great resource. Stereo conversion involves isolating elements at different depths, projecting them onto proxy geometry, and rendering them from a stereo camera rig. My background plate is a still image (and therefore much simpler than the examples in the article), but I’ll still need to figure out a workflow for compositing on a warped spherical background.

Stereo conversion and projection in Nuke via http://www.fxguide.com/featured/art-of-stereo-conversion-2d-to-3d-2012/
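The actual projections will be built in Nuke following the article, but the core idea is simple enough to prototype: isolate elements at different depths and give each one a horizontal offset that falls off with distance. Here’s a tiny NumPy sketch of that pixel-shift version, purely as a preview tool; the offset formula and numbers are my own placeholders, not the article’s projection workflow:

```python
import numpy as np

def naive_stereo_preview(plate, layers, interaxial_px=20.0, convergence=30.0):
    """Crude 2D-to-stereo preview: shift masked depth layers horizontally.

    `plate` is an (H, W, 3) float array; `layers` is a list of (mask, depth)
    pairs ordered back to front, where mask is an (H, W) bool array and depth
    is in meters. Default values are illustrative only.
    """
    left, right = plate.copy(), plate.copy()
    for mask, depth in layers:
        # Zero parallax at the convergence distance, growing as objects get closer.
        offset = int(round(interaxial_px * (convergence / depth - 1.0)))
        for img, sign in ((left, 1), (right, -1)):
            shifted = np.roll(plate, sign * offset, axis=1)
            shifted_mask = np.roll(mask, sign * offset, axis=1)
            img[shifted_mask] = shifted[shifted_mask]
    return left, right
```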

The scene will then be rebuilt in Maya, using measurement data to accurately recreate the background on which to animate CG elements. Emma shot the spheres with 3-step exposure bracketing, so I can produce an HDR sphere to match the lighting to a passable degree. After finishing the scene and animating the moving elements, the next challenge will be rendering the scene in stereo.
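As a side note on the HDR step, here’s a minimal sketch of merging the three brackets with OpenCV’s Debevec tools. The file names and shutter speeds are placeholders, and the real merge may well happen in Photoshop or a dedicated HDR tool instead, but it shows the general idea:

```python
import cv2
import numpy as np

# Sketch of merging 3-step bracketed exposures into an HDR latlong for
# image-based lighting. File names and exposure times are placeholders; the
# real brackets would come from the stitched Montana panoramas.
files = ["montana_under.jpg", "montana_normal.jpg", "montana_over.jpg"]
times = np.array([1 / 250.0, 1 / 60.0, 1 / 15.0], dtype=np.float32)  # assumed shutter speeds

images = [cv2.imread(f) for f in files]
response = cv2.createCalibrateDebevec().process(images, times)   # recover camera response
hdr = cv2.createMergeDebevec().process(images, times, response)  # merge into linear HDR
cv2.imwrite("montana_latlong.hdr", hdr)
```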


360-Degree Stereo Rendering

I’m planning on rendering this project in VRay 3.1 for Maya, which is sadly one update away from having built-in VR support. VRay 3.2 for 3DS Max can produce stereo-spherical imagery formatted for the Samsung GearVR and Oculus Rift DK2, but I’ll have to find another solution.

So near, yet so far… via http://www.chaosgroup.com/en/2/news.html

I recently discovered an exceptionally thorough series of tutorial videos from EleVR, a research group dedicated to testing VR methods. This series does an excellent job explaining the unique challenges associated with spherical stereoscopy, and even includes helpful animated gifs! (I will absolutely be using / citing these in my future presentations. 🙂 )

Via http://elevr.com/cg-vr-1/ :

Monoscopic Spherical Imagery (one eye looking in all directions)

Incorrect Stereoscopic Spherical Imagery (two eyes rotating around in their sockets)

Correct Stereoscopic Spherical Imagery (A camera rig emulating two eyes with one convergence point)

The article goes on to explain that there are two main methods of configuring the cameras in a stereo rig, Converged and Parallel, with the Off-Axis configuration “splitting the difference.” A presentation from www.sky.com entitled “Basic Principles of Stereoscopic 3D” does an excellent job explaining the pros and cons of these methods (and defines a glossary of relevant terms). Because this blog is long enough as-is, I’ll just go ahead and say that Off-Axis is generally the best bet for comfortable spherical stereo; there’s a quick numerical sketch below, and I’ll elaborate more in my thesis!

Off-Axis Stereo Configuration via http://elevr.com/cg-vr-1/
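To make that concrete, here’s a tiny sketch of the usual off-axis parallax relationship: objects at the convergence distance land at zero parallax, nearer objects go negative (in front of the screen), and distant objects approach the interaxial separation. The numbers are illustrative only:

```python
def screen_parallax(depth, interaxial=6.5, convergence=350.0):
    """Parallax (measured at the convergence plane) for an off-axis rig.

    Zero at the convergence distance, negative in front of it, approaching
    the interaxial separation far behind it. Units here are centimeters and
    purely illustrative.
    """
    return interaxial * (1.0 - convergence / depth)

for z in (100.0, 350.0, 1000.0, 1e6):
    print(f"depth {z:>9.0f} cm -> parallax {screen_parallax(z):+.2f} cm")
```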

Once the camera rig is good to go, it’s time to start rendering a stereo image. This is a tricky process due to the omni-directional nature of spherical video. Rotating the entire camera rig around a central node means that, for each still frame, neither camera has a fixed position. It’s important to capture information from all 360 degrees of rotation; otherwise the stereoscopic offset collapses when a user looks to the side.

So, how can you create a single spherical image out of a moving camera?

Methods for producing stereoscopic panoramas have existed for quite some time; just look at this paper from Paul Bourke, circa 2002. Through a process called “strip rendering” or “roundshot,” each camera in the stereo rig renders a 1-degree vertical slice while rotating around the rig’s central node. These slices are then stitched together into a stereoscopic pair of image spheres.

Strip Rendering via http://paulbourke.net/stereographics/stereopanoramic/
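Here’s a toy Python sketch of that strip-rendering loop, just to make the bookkeeping clear. The render_slice function is a stand-in for an actual renderer call, which is exactly why this gets expensive:

```python
import numpy as np

def strip_render(render_slice, width=3600, height=1800):
    """Toy version of Bourke-style strip rendering.

    `render_slice(eye, yaw_degrees)` is assumed to return a (height, slice_w, 3)
    image: a narrow vertical slice rendered from the given eye after rotating
    the whole rig to `yaw_degrees`. In practice that call is a full renderer
    invocation; this only shows how the slices assemble into two panoramas.
    """
    slice_w = width // 360                      # one column block per degree
    panos = {eye: np.zeros((height, width, 3)) for eye in ("left", "right")}
    for step in range(360):                     # rotate the rig in 1-degree increments
        for eye in ("left", "right"):
            panos[eye][:, step * slice_w:(step + 1) * slice_w] = render_slice(eye, step)
    return panos["left"], panos["right"]
```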

This process is, sadly, time-consuming and labor-intensive. It works well for still imagery, but would be an insane undertaking at the suggested 60 frames per second of low-latency VR video. Thankfully, we can just use the incredibly helpful Domemaster3D shader by Andrew Hazelden. The Domemaster plugin works with Maya and has output options for different stereo configurations and spherical transform formats. According to the shader’s GitHub repository, it supports Mental Ray, Arnold, and (thankfully) VRay for Maya. There’s even a convenient shelf!

This is the current collection of Domemaster3D V1.6 Shaders.

Assuming that the shader works with the VRay licenses I have available, the Domemaster plugin is the most viable solution for rendering stereoscopic spherical video at a speed reasonable for a VR animation pipeline. The EleVR Maya tutorial walks through the steps, and although they use Mental Ray I expect similar results for my tests this week.

Domemaster tutorial via http://elevr.com/cg-vr-2/
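For my own notes, here’s roughly how I expect to batch the Mental Ray frames from the command line once the Domemaster scene is set up. The scene and output paths are placeholders, and the flags are the standard Maya Render options, which I’ll re-check against whatever Maya version the farm is running:

```python
import subprocess

# Sketch of a command-line batch render of the turntable with mental ray.
# Paths are placeholders; confirm flags with `Render -help -r mr`.
subprocess.run([
    "Render",
    "-r", "mr",                      # mental ray batch renderer
    "-s", "1", "-e", "300",          # 300-frame turntable
    "-rd", "renders/dreadStereo",    # output directory
    "scenes/dreadStereo_turntable.mb",
], check=True)
```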

…Getting this all to work on Drexel’s render farm will be a story for another day.


Testing & Implementation 

I’m confident in the validity of my research, but I still need to test my assumptions. I’ll write a follow-up post with the results of my tests in the near future!