Being able to see beyond the direct line of sight is an intriguing prospect and could benefit a wide variety of important applications. Recent work has demonstrated that time-resolved measurements of indirect diffuse light contain valuable information for reconstructing shape and reflectance properties of objects located around a corner. In this paper, we introduce a novel reconstruction scheme that, by design, produces solutions that are consistent with state-of-the-art physically-based rendering. Our method combines an efficient forward model (a custom renderer for time-resolved three-bounce indirect light transport) with an optimization framework to reconstruct object geometry in an analysis-by-synthesis sense. We evaluate our algorithm on a variety of synthetic and experimental input data, and show that it gracefully handles uncooperative scenes with high levels of noise or non-diffuse material reflectance.
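As an illustrative aside, the analysis-by-synthesis idea above can be sketched in a few lines: repeatedly render a candidate scene with a forward model and nudge the scene parameters to better match the measurement. The toy forward model, the single depth parameter, and all names below are assumptions for illustration, not the authors' renderer.

```python
# Toy stand-in for a renderer: maps an object "depth" to a simulated
# time-resolved measurement (a 1D histogram of 32 time bins). This
# forward model and all parameter names are illustrative assumptions.
def forward_model(depth, n_bins=32):
    return [max(0.0, 1.0 - abs(0.1 * t - depth)) for t in range(n_bins)]

def loss(depth, measurement):
    # Data-fit term: squared difference between the synthesized and
    # observed measurements.
    simulated = forward_model(depth)
    return sum((s - m) ** 2 for s, m in zip(simulated, measurement))

def reconstruct(measurement, init=0.5, lr=0.02, eps=1e-4, iters=300):
    """Analysis-by-synthesis: adjust the scene parameter so that the
    rendered (synthesized) measurement matches the observed one."""
    x = init
    for _ in range(iters):
        # Central finite-difference gradient of the data-fit loss.
        g = (loss(x + eps, measurement) - loss(x - eps, measurement)) / (2 * eps)
        x -= lr * g
    return x

true_depth = 1.7
measurement = forward_model(true_depth)
estimate = reconstruct(measurement)
```

In the actual method the forward model is a time-resolved three-bounce renderer and the unknowns describe full object geometry, but the optimization loop has this same shape.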
Seamless global parametrization of surfaces is a key operation in geometry processing, e.g. for high-quality quad mesh generation. A common approach is to prescribe the parametric domain structure, in particular the locations of singularities (cones), and solve a non-convex optimization problem minimizing a distortion measure, with local injectivity imposed through constraints or barrier terms. In both cases, an initial valid parametrization is essential to serve as a feasible starting point for obtaining an optimized solution. While convexified versions of the constraints eliminate this initialization requirement, they narrow the range of solutions, causing some problem instances that actually do have a solution to become infeasible. We demonstrate that for arbitrary given sets of topologically admissible parametric cones with prescribed curvature, a global seamless parametrization always exists (with the exception of one well-known case). Importantly, our proof is constructive and directly leads to a general algorithm for computing such parametrizations. Most distinctively, this algorithm is bootstrapped with a convex optimization problem (solving for a conformal map), in tandem with a simple linear equation system (determining a seamless modification of this map). This map can then serve as a valid starting point and be optimized with respect to specific distortion measures using injectivity-preserving methods.
We present a novel linear subdivision scheme for face-based tangent directional fields on triangle meshes. Our subdivision scheme is based on a novel coordinate-free representation of directional fields as halfedge-based scalar quantities, bridging the finite-element representation with discrete exterior calculus. By commuting with differential operators, our subdivision is structure-preserving: it reproduces curl-free fields precisely, and reproduces divergence-free fields in the weak sense. Moreover, our subdivision scheme directly extends to directional fields with several vectors per face by working on the branched covering space. Finally, we demonstrate how our scheme can be applied to efficient and robust directional-field design, advection, and earth mover's distance computation.
Immersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this paper, we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small amounts of depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and improves the impression of realistic depth in VR.
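The geometry behind ocular parallax can be sketched with a simple 2D model: because the projection center sits slightly in front of the rotation center, an eye rotation shifts the retinal images of near points more than those of far points. The offset value and all names below are illustrative assumptions, not measured parameters from the study.

```python
import math

# Simplified 2D eye model (x = lateral, z = forward). The offset
# between the eye's center of rotation and center of projection is an
# assumed value for illustration only.
EYE_OFFSET_M = 0.008

def retinal_angle(point, gaze_angle, offset=EYE_OFFSET_M):
    """Angle of a scene point relative to the gaze direction, as seen
    from the projection center, which sits `offset` in front of the
    rotation center along the current gaze direction."""
    cx = offset * math.sin(gaze_angle)
    cz = offset * math.cos(gaze_angle)
    return math.atan2(point[0] - cx, point[1] - cz) - gaze_angle

def ocular_parallax(near, far, gaze_angle):
    """Relative retinal shift between a near and a far point induced by
    rotating the eye from straight ahead to `gaze_angle` (radians)."""
    shift_near = retinal_angle(near, gaze_angle) - retinal_angle(near, 0.0)
    shift_far = retinal_angle(far, gaze_angle) - retinal_angle(far, 0.0)
    return shift_near - shift_far

# A 20-degree gaze shift; near point at 0.5 m, far point at 10 m.
theta = math.radians(20)
dp = ocular_parallax((0.0, 0.5), (0.0, 10.0), theta)
```

With a zero offset the two shifts would be identical and `dp` would vanish; the nonzero offset is what makes the shift depth-dependent.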
In this work, we investigate a simple, low-cost, and compact optical coding camera design that supports high-resolution image reconstructions from raw measurements with low pixel counts. Our method uses an end-to-end framework to simultaneously optimize the optical design and a reconstruction network for obtaining super-resolved images from raw measurements. The optical design space is that of an engineered point spread function (implemented with diffractive optics), which can be considered an optimized anti-aliasing filter that preserves as much high-resolution information as possible despite imaging with a low pixel count, low fill-factor SPAD array. We further investigate a deep network for reconstruction. The effectiveness of this joint design and reconstruction approach is demonstrated for a range of applications, including high-speed imaging, time-of-flight depth imaging, and transient imaging. While our work specifically focuses on low-resolution SPAD sensors, similar approaches should prove effective for other emerging image sensor technologies with low pixel counts and low fill factors.
Various applications of cross field design on surfaces benefit from alignment of these fields to sharp surface features. We introduce a new class of cross field energies built from spherical harmonic functions that exhibit natural alignment to salient geometric features. Our approach is based on theoretical analysis of a class of energies that are sensitive to sharp creases on the surface but are intrinsic on smooth regions away from these creases. We leverage this pair of properties to generate feature-aligned cross fields and demonstrate their applicability to quad meshing and polycube generation.
We present an algorithm to generate digital painting lighting effects from a single image. Our algorithm is based on an observation: artists use many overlapping strokes to paint lighting effects, i.e., pixels with a dense stroke history tend to gather more illumination strokes. Based on this observation, we design an algorithm that first estimates the stroke density of a digital painting using color geometry, and then generates novel lighting effects by mimicking artists' coarse-to-fine workflow. Coarse lighting effects are first generated using a wave transform, and then retouched according to the stroke density of the original illustration into usable lighting effects. Our algorithm is content-aware, with the generated lighting effects naturally adapting to the image structure, and can be used as an interactive tool to simplify the current labor-intensive workflow for generating lighting effects for digital and matte paintings. In addition, our algorithm can also produce usable lighting effects for photographs or 3D rendered images. We evaluate our approach with both an in-depth qualitative analysis and a quantitative analysis that includes a perceptual user study. Results show that our proposed approach is able to produce favorable lighting effects compared with existing approaches.
Deep Iterative Frame Interpolation for Full-frame Video Stabilization
We introduce an efficient method for designing shell reinforcements of minimal weight. Inspired by classical Michell trusses, we create a reinforcement layout whose members are aligned with optimal stress directions, then optimize their shape, minimizing volume while keeping stresses bounded. We exploit the two predominant techniques for reinforcing shells: adding ribs aligned with stress directions and using thicker walls in regions of high stress. Most previous work can generate either only ribs or only variable-thickness walls; in the general case, however, neither approach by itself provides optimal solutions. By using a more precise volume model, our method is capable of producing optimized structures with the full range of qualitative behaviors: from ribs to walls, smoothly transitioning in between. Our method includes new algorithms for determining the layout of reinforcement structure elements, and an efficient algorithm to optimize their shape, minimizing a non-linear, non-convex functional at a fraction of the cost and with better optimality compared to standard solvers. We demonstrate optimization results for a variety of shapes, and the improvements our method yields in the strength of 3D-printed objects.
Monte Carlo (MC) path tracing suffers from serious noise. Two common solutions are noise filtering, which generates smooth but biased results, and sampling with a carefully crafted probability distribution function (PDF) to produce unbiased results. Both solutions benefit from an efficient algorithm for sampling and reconstructing the incident radiance field. We propose new deep-learning-based approaches, Q- and R-networks, that adaptively sample and reconstruct incident radiance fields. We first propose a CNN-based image-direction reconstruction network (R-network) that simultaneously utilizes coherence in both the incident radiance-field space and image space. In addition, we use deep reinforcement learning (Q-network) that adaptively chooses the best action between increasing the resolution of the radiance field and doubling the sampling density on the radiance field, to maximize the reward in terms of reaching the ground-truth result. To verify the benefits of our approach, we first test our method on our main application, unbiased high-quality rendering that synthesizes a PDF to guide MC sampling, and show robust improvement over state-of-the-art methods. Additionally, we test our method on biased applications, including preview rendering and irradiance caching, and observe that our method achieves performance comparable to state-of-the-art methods in these settings.
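The adaptive choice between the two actions can be illustrated with a toy greedy controller: at each step, take whichever action is predicted to reduce error more. The additive bias/variance error model and all names below are assumptions for illustration; the paper learns this decision with a Q-network rather than evaluating a closed-form model.

```python
# Two candidate actions, matching the abstract's decision: refine the
# radiance field's resolution, or double its sampling density.
ACTIONS = ("increase_resolution", "double_sampling")

def predicted_error(res, spp):
    # Assumed error model: discretization bias falls with resolution,
    # Monte Carlo variance falls with samples per texel. Purely
    # illustrative, not the paper's reward.
    return 1.0 / res + 1.0 / spp

def apply(action, res, spp):
    return (res * 2, spp) if action == "increase_resolution" else (res, spp * 2)

def plan(budget_steps, res=1, spp=1):
    """Greedily pick the action with the largest predicted error drop."""
    history = []
    for _ in range(budget_steps):
        action = max(
            ACTIONS,
            key=lambda a: predicted_error(res, spp) - predicted_error(*apply(a, res, spp)),
        )
        res, spp = apply(action, res, spp)
        history.append(action)
    return history, (res, spp)
```

Under this assumed model the controller naturally alternates between refining resolution and adding samples, mirroring the kind of adaptive allocation the Q-network is trained to discover.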
Field-guided parametrization methods have proven effective for quad meshing of surfaces; these methods compute smooth cross fields to guide the meshing process and then integrate the fields to construct a discrete mesh. A key challenge in extending these methods to three dimensions, however, is the representation of field values. Whereas cross fields can be represented by tangent vector fields that form a linear space, the 3D analog---an octahedral frame field---takes values in a nonlinear manifold. In this work, we describe the space of octahedral frames in the language of differential and algebraic geometry. With this understanding, we develop geometry-aware tools for optimization of octahedral fields, namely geodesic stepping and exact projection via semidefinite relaxation. Our algebraic approach not only provides an elegant and mathematically sound description of the space of octahedral frames but also suggests a generalization to frames whose three axes scale independently, better capturing the singular behavior we expect to see in volumetric frame fields. These new odeco frames, so called as they are represented by orthogonally decomposable tensors, also admit a semidefinite-program-based projection operator. Our description of the spaces of octahedral and odeco frames suggests computing frame fields via manifold-based optimization algorithms; we show that these algorithms efficiently produce high-quality fields while maintaining stability and smoothness.
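As a small illustration of geodesic stepping, the sketch below steps a rotation matrix along the manifold SO(3) via the exponential map (Rodrigues' formula), so every iterate remains exactly a rotation rather than drifting off the manifold. Octahedral frames live in a quotient of SO(3), so this is a simplified stand-in for the paper's frame-field update, and all names are illustrative.

```python
import math

def mat_mul(a, b):
    # Plain 3x3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def so3_exp(w):
    """Rodrigues' formula: matrix exponential of the axis-angle vector w."""
    theta = math.sqrt(sum(x * x for x in w))
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (x / theta for x in w)
    K = [[0.0, -kz, ky], [kz, 0.0, -kx], [-ky, kx, 0.0]]  # skew(k)
    s, c = math.sin(theta), 1.0 - math.cos(theta)
    K2 = mat_mul(K, K)
    # exp(theta*K) = I + sin(theta) K + (1 - cos(theta)) K^2
    return [[(1.0 if i == j else 0.0) + s * K[i][j] + c * K2[i][j]
             for j in range(3)] for i in range(3)]

def geodesic_step(R, grad_w, step):
    """Move R along the geodesic opposite the tangent (axis-angle)
    direction grad_w; the result is exactly a rotation matrix."""
    return mat_mul(R, so3_exp([-step * g for g in grad_w]))
```

The point of stepping intrinsically is that no re-orthogonalization or projection is needed after each iteration; the paper's semidefinite relaxation handles the separate problem of projecting arbitrary inputs onto the frame variety.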