SCENE: Computational videography applied to visual content production
The session focuses on technologies developed to flexibly combine real and computer-generated imagery for film and TV production, how they could change workflows (i.e. the relation between shooting and processing), and why those changes could be significant for content producers and researchers alike.
|Camera Technology: The RGB+Z Motion SCENE Camera|
This presentation describes the concept behind the Motion SCENE Camera (MSC), a specially adapted camera based on the ARRI ALEXA Studio. The MSC adds depth-map capture, yielding an RGB+Z representation in which color and depth are recorded synchronously from the same perspective.
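As a sketch of what such a synchronized color-plus-depth frame might look like in code (the class name and field layout are illustrative, not taken from the SCENE project):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RGBZFrame:
    """One frame of synchronized color and depth from the same viewpoint."""
    rgb: np.ndarray    # (H, W, 3) uint8 color image
    depth: np.ndarray  # (H, W) float32 depth, e.g. in meters

    def __post_init__(self):
        # Because color and depth are captured from the same perspective,
        # they must share one pixel grid.
        if self.rgb.shape[:2] != self.depth.shape:
            raise ValueError("RGB and depth must share the same resolution")

# A toy 4x4 frame: uniform gray image in front of a plane 2 m away.
frame = RGBZFrame(
    rgb=np.full((4, 4, 3), 128, dtype=np.uint8),
    depth=np.full((4, 4), 2.0, dtype=np.float32),
)
```

The key property this models is that every color pixel has a depth value at the same coordinates, which is what makes the per-pixel operations described later in the session (refocusing, depth keying) possible.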
|Workflow: The SCENE Representation Architecture|
Computational photography has fundamentally changed the relation between the shooting and processing of imagery. Because video production workflows are more complex, these changes will be even more significant in computational videography. This presentation introduces a new way of structuring and storing video and CGI content, enabling more efficient and higher-quality workflows.
|Production and postproduction tools: Next Generation Postproduction|
A set of advanced algorithms for scene analysis and spatio-temporal consistency, built on the aforementioned SCENE Representation Architecture, strengthens the bridge between image-based formats (textures, depth maps, etc.) and object-based formats (meshes, voxels, etc.). How can these methods help solve tasks such as relighting a scene, manipulating depth of field, or performing virtual costume changes without labor-intensive manual work in the image domain?
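To illustrate one of these tasks, depth-of-field manipulation becomes per-pixel arithmetic once a depth channel is available. The following is a minimal sketch, not a physically based lens model; the function name and the linear blur weighting are assumptions for illustration:

```python
import numpy as np

def synthetic_dof(rgb, depth, focus_m, strength=1.0):
    """Refocus an RGB+Z image: blend in a blurred copy of the image,
    weighted by each pixel's distance from the focal plane.

    A crude stand-in for a real lens model, but it shows how a depth
    channel replaces manual rotoscoping for this kind of effect.
    """
    img = rgb.astype(np.float32)
    # Cheap 3x3 box blur via shifted copies (edges wrap; fine for a sketch).
    blurred = sum(np.roll(np.roll(img, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # Blend weight: 0 at the focal plane, rising to 1 far from it.
    w = np.clip(strength * np.abs(depth - focus_m), 0.0, 1.0)[..., None]
    return (1.0 - w) * img + w * blurred
```

Pixels lying exactly on the focal plane are returned unchanged, while pixels far from it receive the blurred values, mimicking a shallow depth of field chosen entirely in postproduction.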
|Use of Real-Time RGB+Z Capture in Broadcast Virtual Sets|
This presentation covers the implementation of a system that captures and uses an extra depth channel in real time; the use of this technology for rendering, effects, and talent interaction with virtual objects; and the results obtained to date, together with expected future improvements.
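The core mechanism behind talent interacting with virtual objects is depth keying: a per-pixel depth test between the live feed and the CG render, instead of a chroma key. A minimal sketch (function and variable names are illustrative assumptions):

```python
import numpy as np

def z_composite(live_rgb, live_depth, cg_rgb, cg_depth):
    """Composite a CG element into live video by per-pixel depth test.

    Wherever the CG element is closer to the camera than the live-action
    pixel, show the CG; otherwise keep the live image. This lets talent
    walk in front of or behind virtual objects without a green screen.
    """
    nearer = (cg_depth < live_depth)[..., None]  # (H, W, 1) boolean mask
    return np.where(nearer, cg_rgb, live_rgb)
```

Because the test is purely per-pixel, it runs at broadcast frame rates on commodity hardware; the hard part, which the presentation addresses, is capturing a clean, synchronized depth channel in the first place.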
- Johannes Steurer, Principal Engineer R&D at Arnold & Richter Cine Technik (ARRI), Germany
- Christopher Haccius, Video and Model-Based Coding researcher, Intel Visual Computing Institute, Germany
- Adrian Hilton, Director of the Centre for Vision, Speech and Signal Processing, University of Surrey, UK
- Sammy Rogmans, Real-time multiview video researcher and project manager at iMinds, Belgium
For more information: