Dreadnoughtus VR: Teaser Photos!


Hey everyone! Here are a few preview screenshots from the current draft of DreadnoughtusVR. There's still plenty to do before my defense next week, but everything is really coming together.

Be on the lookout for the public release via JauntVR on July 25th!

Immersive Paleoart: Conference Presentation Announcements


I’m very pleased to announce that my thesis, “Immersive Paleoart: Reconstructing Dreadnoughtus schrani and Remediating the Science Documentary for Cinematic Virtual Reality,” has been accepted to three conferences this summer! I will be presenting my research in Villanova, Barcelona, and Anaheim:


WICT “Tech It Out” 2016: Breakout Session
http://www.wictphiladelphia.org/event/save-the-date-tech-it-out-2016/
June 30, 2016, Villanova (Pennsylvania)

“This year’s Tech It Out will provide a basic understanding of Virtual and Augmented Reality, its current and future applications, how industry and research are driving it – and its potential for transforming our industry and world. The event will include demos, speakers and presentations from key innovators and vendors in the VR space, a panel discussion and breakout sessions with opportunities for Q&A and networking. The event brings together representatives from all facets of this emerging industry including video, TV, social media, gaming, health, communications, sports, music, robotics and more.”

IATED EDULEARN16: Paper Presenter
8th annual International Conference on Education and New Learning Technologies
https://iated.org/edulearn/
July 4–6, 2016, Barcelona (Spain)

“EDULEARN is one of the largest international education conferences for lecturers, researchers, technologists and professionals from the educational sector. After 8 years, it has become a reference event where more than 700 experts from 80 countries will get together to present their projects and share their knowledge on teaching and learning methodologies and educational innovations. The 2016 edition of EDULEARN is sure to be among the most successful education conferences in Europe.”



SIGGRAPH 2016: Poster Presenter & Drexel Expo Booth
The 43rd International Conference and Exhibition on Computer Graphics & Interactive Techniques
http://s2016.siggraph.org/
July 24–28, 2016, Anaheim (California)

“SIGGRAPH is the world’s largest, most influential annual event in computer graphics and interactive techniques: Five days of research results, demos, educational sessions, art, screenings, and hands-on interactivity featuring the community’s latest technical achievements, and three days of commercial exhibits displaying the industry’s current hardware, software, and services.”

  • In addition to the SIGGRAPH poster presentation, my short CVR documentary “DreadnoughtusVR” will be played in an immersive dome provided by Spitz Inc. at the Drexel expo booth. Please come by and say hello!
  • If any of my Philly-local friends would like to see what this is all about, I’ll also be presenting my poster at Drexel’s Digital Media Senior Show on June 5th, 2016.


Thesis Defense: June 13th


THESIS DEFENSE

“Immersive Paleoart: Reconstructing Dreadnoughtus schrani and Remediating the Science Documentary for Cinematic Virtual Reality”


By: Valentina Feldman
Date: Monday, 6/13/2016
Time: 11:00am – 1:00pm
Building: The URBN Center, 3501 Market St. Philadelphia, PA 19104
Room: URBN Screening Room 349


Light refreshments will be provided!


This event will conclude with a screening of DreadnoughtusVR.


Watch the Trailer: https://youtu.be/WXlZxbD-W9s

ABSTRACT: This thesis is a synthesis of digital paleoart reconstruction, prototype VR pipeline design, and the adaptation of structural narrative principles for immersive media. We address common issues associated with the accurate portrayal of dinosaurs in media, Cinematic Virtual Reality (CVR) production, and the direction of viewer attention in immersive digital environments. After developing and testing a stable CVR workflow, we design and produce a piece of scientific VR Paleoart content intended for educational outreach. Our production methods include a highly detailed CGI dinosaur reconstruction informed by comparative anatomy and biomechanical simulation, stereoscopic spherical rendering, and immersive film production. Our approach is validated through the completion of a CVR documentary about the titanosaur Dreadnoughtus schrani, one of the largest dinosaurs yet discovered. This documentary, starring paleontologist Dr. Ken Lacovara, will be made publicly available for all common VR distribution platforms through a partnership with JauntVR. Our goal is to deliver immersive scientific content to a wide audience of smartphone owners, taking advantage of accessible VR technology to establish new precedents for educational media.

Feel free to invite guests, but please RSVP so I know how much coffee to bring!

Dreadnoughtus VR: Behind the Scenes (time lapse)


In my last update, I wrote at length about the experimental techniques used in our “Cinematic Virtual Reality” film shoot for Dreadnoughtus VR, coming June 2016 to a smartphone near you. Now you can watch our entire process in a mere fraction of the time! (Specifically, about 0.0012037 of it.)

Thanks again to all who volunteered their efforts on the day of our shoot to make this happen!

Check back next week on Earth Day (4/22/2016) for a first-glimpse spherical teaser trailer.

Immersive Paleoart: Shooting 4K VR Video


Over six months of research, equipment rigging, and pre-production, multiple failed and successful stitching tests, and the hard work of seven crew members and over a dozen collaborators all paid off last week in our first VR film shoot for “Project Dreadnought” (working title). There we captured an interview with world-renowned paleontologist Dr. Kenneth Lacovara, who joined us in the Dinosaur Hall at the historic Academy of Natural Sciences. The 3D spherical video captured during this experimental shoot will serve as the framing device, narration, and compositing background plates for an immersive paleoart encounter with Dreadnoughtus schrani, coming June 2016.


It was a great pleasure working with Dr. Lacovara on this shoot. He eloquently told the story of Dreadnoughtus, from the discovery of the first fossil in the arid badlands of Patagonia, through the prep work and 3D laser scanning project, all the way to the cutting-edge robotics and Virtual Paleoart research presently taking place with the digitized fossils. Lacovara’s narration broke down complex biological processes into digestible language that will guide our audience through the virtual restoration of Dreadnoughtus. Even better, he delivered several segments in one complete take, preserving the immersive construct of VR film by eliminating any need for cut-aways and editing.


Our footage was captured on two Blackmagic Micro 4K cinema cameras, each equipped with a 190-degree fisheye lens. The cameras were mounted on a gimbal arm set at the nodal point of “Camera R,” allowing us to rotate the rig incrementally and asynchronously record segments of a parallax-free image sphere. Meanwhile, “Camera L” was offset at an approximate interpupillary distance to intentionally generate parallax for the final stereo-spherical stitch. This parallax will be refined with compositing tools in post-production, where the footage will also be combined with CGI Dreadnoughtus renders.
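
As a rough sanity check on that incremental rotation scheme, the number of yaw stops needed follows from the lens coverage and the stitching overlap you want between neighboring segments. Here is a toy calculation (the overlap value is illustrative, not our actual shoot setting):

    import math

    def rotation_stops(lens_fov_deg: float, overlap_deg: float) -> int:
        """How many evenly spaced yaw stops cover a full 360 degrees,
        keeping `overlap_deg` of shared image between neighboring
        segments for stitching."""
        new_coverage = lens_fov_deg - overlap_deg  # fresh degrees per stop
        return math.ceil(360.0 / new_coverage)

    # e.g., 190-degree fisheyes with a generous 70 degrees of overlap
    print(rotation_stops(190, 70))  # -> 3 stops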


This camera rig was wired up to a 4K recording station that allowed us to monitor and balance the cameras in real-time. Project adviser Nick Jushchyshyn rigged this cabinet together for this shoot, sacrificing the nuts and bolts of many an unused storage rack in the process. Below is an early version of our camera rig, before Nick built a suit of travel-proof armor for it:

[Photo: early version of the camera rig]

Due to the limited low-light capabilities of the Blackmagic cameras, we also brought along an abundance of lighting equipment, including three Arri kits, a Matthews kit of flags and diffusers, blue gels and clamps, seven C-stands, and a portable LED panel. Other equipment included a spherical DSLR rig for HDRI capture, an H4N audio recorder paired with condenser shotgun mics, a wireless lav mic, a Canon T3i for set photography, a locked-off FS100 camcorder, and many sandbags to pin it all down.


I’d like to thank all of the wonderful volunteers and collaborators who donated their time and energy to make this production possible. If you found this post interesting, please subscribe for more updates on this project… including “How to Resurrect a Dinosaur in About a Thousand Not-Entirely-Easy Steps” (working title).

Spoilers.


Spherical Rendering: Initial Tests


I had originally planned on having a fully rendered, stereoscopic turntable ready for this week’s check-in, but I grossly underestimated how long it would take to render one. I left it running overnight, but I’m still on frame 81 out of 300 — technically double that, considering I’m rendering from two cameras at once.

Here is a single frame of what’s currently rendering (left eye):
[Render frame: dreadStereo_compTest]

I was able to get Mental Ray working, but VRay for Maya still isn’t cooperating with the DomeMaster plug-in. According to the plug-in’s GitHub repository, it should work if I run it through VRay’s standalone command-line renderer. I’ll aim to have a test of that done for next week, in addition to getting my Mental Ray renders polished and working.

VRay Standalone DomeMaster Render via https://github.com/zicher3d-org/domemaster-stereo-shader

This week’s renders will be viewable on the GearVR by formatting them in a top/bottom configuration and putting the finished .MP4 files in the root/Oculus/360Videos folder of my Galaxy Note 4. The built-in Oculus 360Video viewer plays videos as monoscopic by default, so I need to rename the video files as “namingConventions_TB.mp4” to tell the viewer to split each frame into top and bottom halves and play them back in stereo. The aspect ratio of each eye will be compressed to 4:1 (so, 1920 x 480). When I test VRay next week I’ll try to render in at least 2K, to test the limits of my phone’s playback.

Top/Bottom stereo configuration for latlong videos via http://elevr.com/cg-vr-2/
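
Since the top/bottom packing step is just pixel shuffling, it can be scripted outside of Maya. Below is a minimal sketch that drives ffmpeg from Python, assuming separate left- and right-eye equirectangular renders; the filenames and this particular ffmpeg approach are my own placeholder workflow, not an official Oculus tool:

    import subprocess

    def pack_top_bottom(left: str, right: str, out: str,
                        width: int = 1920, eye_height: int = 480) -> None:
        """Squeeze two per-eye equirectangular videos into a single
        top/bottom (_TB) file for the Oculus 360Video viewer."""
        filters = (
            f"[0:v]scale={width}:{eye_height}[L];"
            f"[1:v]scale={width}:{eye_height}[R];"
            "[L][R]vstack=inputs=2[v]"
        )
        subprocess.run([
            "ffmpeg", "-i", left, "-i", right,
            "-filter_complex", filters, "-map", "[v]",
            "-c:v", "libx264", "-pix_fmt", "yuv420p", out,
        ], check=True)

    pack_top_bottom("turntable_L.mp4", "turntable_R.mp4",
                    "dreadTurntable_TB.mp4")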

The creator of the DomeMaster plugin also has a ton of really interesting dome/VR production tools on his downloads page. Some are free and some are paid, but they might be worthwhile purchases for the department if Drexel goes forward with creating a VR animation class, like Nick has mentioned.

A few noteworthy examples:

Playblast VR for Maya: Playblast in a dome / stereo VR format. Different from the Domemaster in that it renders NURBS, scene elements, etc., just like a playblast. Immersive animation breakdowns sound neat!

Dome2rect: Advertised as a turnkey way to make rectangular trailers from VR/dome imagery.

Domemaster Photoshop Actions: Solves the problem of quickly converting between Dome and VR formats.

Fusion VR Macros: VR Compositing presets for Blackmagic Fusion. Might be worth looking at for workflow logic when moving to Nuke.

Powerpoint Dome: Even slideshows are getting in on VR!

So, to conclude…

Goals for Next Week:

  • Mental Ray VR Turntable
  • VRay VR Turntable
  • Stitch & stereo convert background plates
  • Research for the following week: compositing stereo-spherical video in Nuke

Stereo-Spherical Rendering Research


Week #2 Goals: 

  • Write “Dreadnoughtus Experience” script (v01)
  • Review monoscopic Montana spheres
  • Research mono-to-stereo conversion
  • Research 360-degree stereo rendering solution
  • PRODUCTION: Build Patagonia scene
  • TESTING: Test stereo renders on GearVR and Cardboard

“Dreadnought” Script Progress

With the goals for this term laid out, it’s time to begin putting serious thought into the narrative structure of my thesis. I’m aiming for a short, 2-4 minute documentary-style experience guided by narration from Dr. Ken Lacovara. I’ve finished the first draft of the script, complete with awful placeholder narration! (I’ve established a monthly meeting time with Dr. Lacovara, who will assist in writing his dialogue.)

WIP Script via www.celtx.com

The narrative of “The Dreadnoughtus Experience” is broken into four locations: the Academy of Natural Sciences’ Dinosaur Hall, a Patagonian desert, a fully digital virtual scale-comparison environment, and Logan Square (outside the Academy). Each location presents its own challenges, but all four should rely on a similar pipeline for capturing, rendering, stitching, and compositing. Let’s find out what that entails!


Monoscopic Sphere Conversion

My research partner Emma Fowler recently captured a small library of 360-degree photo spheres during a trip to Central Montana. Due to the landscape’s similarity to Dread’s excavation site in Argentina, we will use these images as the base plates for the Patagonian desert scene. Emma shot the photo spheres with a Canon 1D Mark 1 camera, a tripod, and a Nodal Ninja 3, an attachment that allows for incremental rotation around a lens’s nodal point. My adviser Nick J. then ran the photos through Kolor Autopano, resulting in a high-quality, seamless, 360-degree panorama.

Montana Panorama via Emma Fowler

Next, the image must be converted to stereo by masking out objects at different depths and projecting them onto proxy geometry. For this step I will be following the fxguide article “Art of Stereo Conversion” as a reference. The article is from 2012 and doesn’t cover a spherical workflow, but it breaks down every step of the stereoscopic VFX pipeline: isolating elements at different depths, projecting them onto proxy geometry, and rendering them from a stereo camera rig. My background plate is a still image (and therefore much simpler than the examples in the article), but I’ll still need to figure out a workflow for compositing on a warped spherical background.

Stereo conversion and projection in Nuke via http://www.fxguide.com/featured/art-of-stereo-conversion-2d-to-3d-2012/
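
For planning which elements are worth isolating, a small-angle estimate of spherical disparity is handy: a point at distance Z seen from eyes separated by interaxial i subtends roughly i / Z radians of disparity, which maps onto the 2π-wide latlong frame. (This quick estimate is my own, not from the fxguide article.)

    import math

    def equirect_disparity_px(depth_m: float, width_px: int = 4096,
                              interaxial_m: float = 0.065) -> float:
        """Approximate horizontal pixel shift for an element at
        `depth_m` in a latlong image, using the small-angle disparity
        i / Z (valid when Z is much larger than i)."""
        angular = interaxial_m / depth_m           # radians
        return width_px * angular / (2.0 * math.pi)

    for z in (2.0, 5.0, 20.0, 100.0):
        print(f"{z:6.1f} m -> {equirect_disparity_px(z):5.2f} px")

Distant hills shift by a fraction of a pixel, which is why only near and mid-ground elements justify their own proxy geometry.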

The scene will then be rebuilt in Maya, using measurement data to accurately recreate the background on which to animate CG elements. Emma shot the spheres with 3-step exposure bracketing, so I can produce an HDR sphere to match the lighting to a passable degree. After finishing the scene and animating the moving elements, the next challenge will be rendering the scene in stereo.
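
Merging those brackets into an HDR environment map can also be scripted. Here is a minimal sketch with OpenCV; the filenames and shutter speeds are placeholders, not Emma’s actual capture settings:

    import cv2
    import numpy as np

    # Three exposure brackets of the same view (hypothetical files)
    paths = ["sphere_under.jpg", "sphere_mid.jpg", "sphere_over.jpg"]
    times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)
    images = [cv2.imread(p) for p in paths]

    # Recover the camera response curve, then merge to a radiance map
    response = cv2.createCalibrateDebevec().process(images, times)
    hdr = cv2.createMergeDebevec().process(images, times, response)

    cv2.imwrite("montana_sphere.hdr", hdr)  # Radiance .hdr for Maya IBL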


360-Degree Stereo Rendering

I’m planning on rendering this project in VRay 3.1 for Maya, which is sadly one update away from having built-in VR support. VRay 3.2 for 3DS Max can produce stereo-spherical imagery formatted for the Samsung GearVR and Oculus Rift DK2, but I’ll have to find another solution.

So near, yet so far… via http://www.chaosgroup.com/en/2/news.html

I recently discovered an exceptionally thorough series of tutorial videos from EleVR, a research group dedicated to testing VR methods. This series does an excellent job explaining the unique challenges associated with spherical stereoscopy, and even includes helpful animated gifs! (I will absolutely be using / citing these in my future presentations. 🙂 )

Via http://elevr.com/cg-vr-1/ :

Monoscopic Spherical Imagery (one eye looking in all directions)

Incorrect Stereoscopic Spherical Imagery (two eyes rotating around in their sockets)

Correct Stereoscopic Spherical Imagery (A camera rig emulating two eyes with one convergence point)

The article goes on to explain that there are two main methods of configuring the cameras in a stereo rig, Converged and Parallel, with the Off-Axis configuration “splitting the difference.” A presentation from www.sky.com entitled “Basic Principles of Stereoscopic 3D” does an excellent job of weighing the pros and cons of these methods (as well as defining a glossary of relevant terms). Because this post is long enough as-is, I’ll just go ahead and say that Off-Axis is generally the best bet for comfortable spherical stereo. I’ll elaborate more in my thesis!

Off-Axis Stereo Configuration via http://elevr.com/cg-vr-1/
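
To make the off-axis idea concrete: instead of toeing the cameras in, each camera’s film back is shifted horizontally so both frustums share a single zero-parallax plane. The shift falls out of similar triangles. (A back-of-the-envelope sketch with made-up numbers, not values from the EleVR or Sky material.)

    def off_axis_film_offset_mm(interaxial_cm: float, focal_mm: float,
                                zero_parallax_cm: float) -> float:
        """Per-eye horizontal film-back shift (in mm) that places the
        zero-parallax plane at `zero_parallax_cm`.
        Similar triangles: offset / focal = (interaxial / 2) / distance."""
        return focal_mm * (interaxial_cm / 2.0) / zero_parallax_cm

    # e.g., 6.5 cm interaxial, 35 mm lens, converging 3 m away
    print(off_axis_film_offset_mm(6.5, 35.0, 300.0))  # ~0.38 mm per eye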

Once the camera rig is good to go, it’s time to start rendering a stereo image. This is a tricky process due to the omni-directional nature of spherical video. Rotating the entire camera rig around a central node means that neither camera holds a fixed position within a single frame. It’s important to capture stereo information from all 360 degrees of rotation; otherwise the stereoscopic offset collapses when a user looks to the side.

So, how can you create a single spherical image out of a moving camera?

Methods for producing stereoscopic panoramas have existed for quite some time; just look at this paper from Paul Bourke, circa 2002. Through a process called “strip rendering” or “roundshot,” each camera in the stereo rig renders a 1-degree vertical slice while rotating around the rig’s central node. These slices are then stitched together into a stereoscopic pair of image spheres.

Strip Rendering via http://paulbourke.net/stereographics/stereopanoramic/
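
To make strip rendering concrete, here is a toy sketch of the stitching half, assuming one 1-degree-wide slice has already been rendered per degree of yaw (the filename pattern is made up):

    from PIL import Image

    def stitch_strips(pattern: str, steps: int = 360) -> Image.Image:
        """Concatenate `steps` vertical slices (one per degree of yaw)
        into a single equirectangular panorama for one eye."""
        slices = [Image.open(pattern.format(i)) for i in range(steps)]
        width, height = slices[0].size
        pano = Image.new("RGB", (width * steps, height))
        for i, s in enumerate(slices):
            pano.paste(s, (i * width, 0))
        return pano

    # The right eye gets the same treatment; together the two panoramas
    # form the stereoscopic pair.
    stitch_strips("strips/left_{:03d}.png").save("left_eye_pano.png")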

This process is, sadly, time-consuming and labor-intensive. It works well for still imagery, but would be an insane undertaking at the suggested 60 frames per second of low-latency VR video. Fortunately, we can just use the incredibly helpful Domemaster 3D Shader by Andrew Hazelden. The Domemaster plugin works with Maya and Mental Ray, and has output options for different stereo configurations and spherical transform formats. According to the shader’s GitHub repository, it also supports Arnold and (thankfully) VRay for Maya. There’s even a convenient shelf!

This is the current collection of Domemaster3D V1.6 Shaders.

Assuming the shader works with the VRay licenses I have available, the Domemaster plugin is the most viable solution for rendering stereoscopic spherical video at a speed reasonable for a VR animation pipeline. The EleVR Maya tutorial details the steps, and although they use Mental Ray, I expect similar results for my tests this week.

Domemaster tutorial via http://elevr.com/cg-vr-2/

…Getting this all to work on Drexel’s render farm will be a story for another day.


Testing & Implementation 

I’m confident in the validity of my research, but I still need to test my assumptions. I’ll write a follow-up post with the results of my tests in the near future!

DIGM Workshops: Week #1 Overview


This series of blog posts will be completed weekly for two of my graduate workshop classes, DIGM620 and DIGM599i. Both of these courses have the same goal: to produce a 10-week project that aids in the development of my thesis, a virtual reality animation of the titanosaur Dreadnoughtus schrani.

Week #1 Goals: 

  • Meet with production team to discuss overall roles for Fall term.
  • Develop a 10-week project that fulfills the scope of this class.
  • Begin production work for the aforementioned project.
  • Meet with Nick J to determine what thesis work will count towards his independent study.
  • Anatomy / committee meeting with Dave Mauriello.
  • Write the script!
  • Figure out what exactly I’m allowed to post on this blog.

Week #1 Notes:
I spent the majority of this week meeting with professors, team members, and potential committee members to figure out what’s going on this quarter. Simply put, “Project Dreadnought” is a huge thesis project, in terms of both mass and scope. I’ve spent the past six months working with a small team of artists to model assets, test simulation and rigging pipelines, and reconstruct the skeleton of Dreadnoughtus via 3D scanning, photogrammetry, and ZBrush sculpting. Simultaneously, my adviser Nick J and I have been working to develop a pipeline for easily (and reliably) rendering spherical, stereoscopic video content optimized for mobile VR.

There are a lot of plates spinning, and this project has the potential to blow out of scope quickly. That’s why I’m pouring every available resource into production: I’m taking a Digital Media workshop class while simultaneously doing an independent study / production class with Nick J, in an effort to double my time. Essentially, I’ll spend the next 10 weeks managing two large chunks of my thesis, overseeing two other artists, and trying to keep my head from rolling off my shoulders. This blog is where I get to document all of that!

DIGM 620 Overview:
After spending the entire summer modeling the bones of Dreadnoughtus and articulating the skeleton, now I get to rig it. I’ll be using my time in this class to coordinate the creation of the Dreadnoughtus rig and manage the other artists on my team: Emma Fowler (Senior Project / Paleontology), Brandon Percia (Sophomore / Animation), and Zack Thomas (Former Co-Op / Animation). Dave Mauriello is advising on this portion of my thesis, and this week I met with him to discuss the best methods for rigging a titanosaur skeleton and simulating muscles. I’ll also be meeting with my committee member Dr. Ken Lacovara once per month to ensure the reconstruction stays anatomically accurate. While I’m working on the skeleton rig, Emma and Brandon will model and texture the animal’s musculature, following a softbody muscle simulation pipeline I established in my spring DIGM 540 class. By the end of this quarter, we aim to have a rigged muscle system ready for skin simulation tests.
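
For a flavor of what the rig work looks like in practice, here is a toy maya.cmds sketch for roughing in a joint chain (the joint count and spacing are invented for illustration; this is not the actual Dreadnoughtus rig):

    # Run inside Maya's Script Editor (Python tab)
    import maya.cmds as cmds

    def build_tail_chain(name="tail", joint_count=30, length=800.0):
        """Create a straight joint chain along +Z; the joints get
        snapped to the articulated caudal vertebrae afterwards."""
        cmds.select(clear=True)  # so the first joint has no parent
        step = length / (joint_count - 1)
        return [cmds.joint(name="{0}_{1:02d}_jnt".format(name, i),
                           position=(0, 0, i * step))
                for i in range(joint_count)]

    build_tail_chain()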

DIGM 599i Overview:
This class serves as a 10-week production study in 360-degree stereoscopic pipeline design. My work will include shooting spherical photo and video background plates, converting monoscopic photo spheres to stereo, researching rendering and stitching options, and seamlessly compositing CG animation for cinematic virtual reality. I’ll also be testing the differences between an HMD VR workflow and spherical video optimized for dome projection. Finally, I’ll use this time to finish pre-production for my thesis: reworking my proposal and developing a script, story “circles,” and layout tests.