Ray tracing is a computationally intensive rendering technique traditionally used in offline high-quality rendering. Powerful hardware accelerators have recently been developed that bring real-time ray tracing within reach of even mobile devices. However, rendering animated scenes remains difficult, as updating the acceleration trees for each frame is a memory-intensive process. This article proposes MergeTree, the first hardware architecture for Hierarchical Linear Bounding Volume Hierarchy (HLBVH) construction, designed to minimize memory traffic. For evaluation, the hardware constructor is synthesized on a 28 nm CMOS process technology. Compared to a state-of-the-art binned SAH builder, the present work speeds up construction by a factor of 5, reduces build energy by a factor of 3.2, and reduces memory traffic by a factor of 3. A software HLBVH builder on a GPU requires 3.3 times more memory traffic. To take tree quality into account, a rendering accelerator is modeled alongside the builder. Given the use of a top-level build to improve tree quality, the proposed builder reduces system energy per frame by an average of 41% with primary rays and 13% with diffuse rays. In large (>500K triangles) scenes, the difference is more pronounced: 62% and 35%, respectively.
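HLBVH-style builders like the one above begin by sorting primitives along a space-filling curve before emitting the hierarchy. As a concrete illustration of the quantity being sorted (this is the standard 30-bit Morton encoding from the GPU LBVH literature, not the paper's hardware design), a minimal sketch:

```python
def expand_bits(v):
    # Spread the low 10 bits of v so consecutive bits land 3 positions apart
    # (the standard magic-number bit-interleaving trick).
    v = (v * 0x00010001) & 0xFF0000FF
    v = (v * 0x00000101) & 0x0F00F00F
    v = (v * 0x00000011) & 0xC30C30C3
    v = (v * 0x00000005) & 0x49249249
    return v

def morton3d(x, y, z):
    """30-bit Morton code for a triangle centroid with coordinates in [0, 1]."""
    q = lambda c: min(max(int(c * 1024.0), 0), 1023)  # quantize to 10 bits
    return (expand_bits(q(x)) << 2) | (expand_bits(q(y)) << 1) | expand_bits(q(z))
```

Sorting primitives by `morton3d` of their centroids groups spatially nearby triangles, which is what lets HLBVH emit treelets in a single streaming pass.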
A conformal flattening maps a curved surface to the plane without distorting angles---such maps have become a fundamental building block for problems in geometry processing, numerical simulation, and computational design. Yet existing methods provide little direct control over the shape of the flattened domain, or else demand expensive nonlinear optimization. Boundary first flattening (BFF) is a linear method for conformal parameterization which is faster than traditional linear methods, yet provides control and quality comparable to sophisticated nonlinear schemes. The key insight is that boundary data can be efficiently constructed via the Cherrier formula together with a pair of Poincaré-Steklov operators; once the boundary is known, the map is easily extended over the rest of the domain. Since computation demands only a single factorization of the real Laplacian, the amortized cost is about 50x less than any previous technique for boundary-controlled conformal flattening. BFF therefore opens the door to real-time editing or fast optimization of high-resolution maps, with direct control over boundary length or angle. We apply this method to maps with sharp corners, cone singularities, minimal area distortion, and uniformization; we also demonstrate for the first time how a surface can be conformally flattened directly onto any given target shape.
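The amortization argument above---factor the Laplacian once, then extend any new boundary data over the interior with a cheap back-substitution---can be sketched on a regular grid (BFF itself uses the cotangent Laplacian of a triangle mesh; the grid and SciPy's `splu` are stand-ins for illustration):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 16  # interior resolution of a square grid
# Standard 5-point Laplacian; Dirichlet boundary values are folded into the RHS.
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))

lu = spla.splu(A.tocsc())  # factor once; every new boundary reuses this

def harmonic_extension(bottom_values):
    # Dirichlet data: given values on the bottom edge, zero on the other edges.
    rhs = np.zeros(n * n)
    rhs[:n] = bottom_values  # the first grid row couples to the bottom boundary
    return lu.solve(rhs).reshape(n, n)

u = harmonic_extension(np.sin(np.linspace(0.0, np.pi, n)))
```

Each call to `harmonic_extension` costs only a triangular solve, which is why interactively editing boundary lengths or angles stays real-time once the factorization exists.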
Sampling is a core component of many applications, including imaging, rendering, geometry processing, and visualization. Prior research has primarily focused on blue noise sampling with a single class of samples, and limited work has addressed multi-class sampling, which remains challenging when the density distribution of each class is non-uniform and differs from the others. In this paper, we present a Wasserstein blue noise sampling algorithm for multi-class sampling that draws samples as the Wasserstein barycenter of multiple density distributions. We employ a more general representation of the optimal transport problem that avoids the domain partition required by other optimal-transport sampling methods. Moreover, an adaptive blue noise distribution is guaranteed for every individual class as well as for their combined class. Sampling efficiency is further improved by applying the Wasserstein distance with entropic constraints. Our method can be applied to multi-class sampling on point set surfaces, and we also demonstrate applications in object distribution and color stippling.
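The entropically constrained Wasserstein barycenter at the heart of the method above can be computed by iterative Bregman projections. A minimal discrete sketch (in the style of Benamou et al.; the function name and the toy 1D setup are illustrative, not the paper's implementation):

```python
import numpy as np

def wasserstein_barycenter(A, M, reg=0.1, weights=None, n_iter=500):
    """Entropic-regularized Wasserstein barycenter of the histograms stored
    in the columns of A, on a shared grid with ground-cost matrix M."""
    n, k = A.shape
    w = np.full(k, 1.0 / k) if weights is None else np.asarray(weights)
    K = np.exp(-M / reg)                   # Gibbs kernel of the ground cost
    u = np.ones((n, k)) / n
    for _ in range(n_iter):
        UKv = u * (K @ (A / (K @ u)))      # marginals of the current plans
        bary = np.exp(np.log(UKv) @ w)     # weighted geometric mean
        u = u * bary[:, None] / UKv
    return bary

# Barycenter of two point masses at opposite ends of [0, 1]: a blurred bump
# centered near the W2 midpoint, illustrating the "combined class" density.
x = np.linspace(0.0, 1.0, 10)
M = (x[:, None] - x[None, :]) ** 2
A = np.zeros((10, 2))
A[0, 0] = 1.0
A[-1, 1] = 1.0
bary = wasserstein_barycenter(A, M)
```

The entropic term (controlled by `reg`) is what makes each iteration a pair of cheap matrix-vector products, which is the efficiency gain the abstract refers to.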
We study Markov Chain Monte Carlo (MCMC) methods operating in primary sample space and their interactions with multiple sampling techniques. We observe that incorporating the sampling technique into the state of the Markov chain, as done in Multiplexed Metropolis Light Transport (MMLT), impedes the ability of the chain to properly explore path space, as transitions between sampling techniques lead to disruptive alterations of path samples. To address this issue, we reformulate Multiplexed MLT in the Reversible Jump MCMC (RJMCMC) framework and introduce inverse sampling techniques that turn light paths into the random numbers that produce them. This allows us to formulate a novel perturbation that can locally transition between sampling techniques, and we derive the correct acceptance probability using RJMCMC. We investigate how to generalize this concept to non-invertible sampling techniques commonly found in practice, and introduce probabilistic inverses that extend our perturbation to cover most sampling methods found in light transport simulations. Our theory reconciles the inverses with RJMCMC, yielding an unbiased algorithm, which we call Reversible Jump MLT (RJMLT). We verify the correctness of our implementation in canonical and practical scenarios and demonstrate improved temporal coherence, a decrease in structured artifacts, and faster convergence on a variety of scenes.
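The key idea---switching techniques through an exact inverse leaves the path unchanged, so the deterministic Jacobians cancel in the reversible-jump acceptance ratio---can be seen in a toy 1D analogue (the "path space", the two techniques, and all names here are illustrative, not the RJMLT implementation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "path space": x in (0, inf) with target density f(x) = exp(-x).
f = lambda x: np.exp(-x)

# Two invertible sampling techniques u -> x, and their exact inverses x -> u:
to_path   = [lambda u: -np.log1p(-u),        # T1: Exponential(1)
             lambda u: -2.0 * np.log1p(-u)]  # T2: Exponential(1/2)
from_path = [lambda x: -np.expm1(-x),
             lambda x: -np.expm1(-x / 2.0)]
# Target pulled back to each primary sample space: f(x(u)) * |dx/du|.
pullback = [lambda u: f(-np.log1p(-u)) / (1.0 - u),
            lambda u: 2.0 * f(-2.0 * np.log1p(-u)) / (1.0 - u)]

t, u = 0, rng.random()
xs = []
for _ in range(40000):
    if rng.random() < 0.3:
        # Technique switch via the inverse: the path x is kept fixed, so the
        # deterministic Jacobians cancel in the RJMCMC acceptance ratio and
        # the move is always accepted -- no disruptive path alteration.
        x = to_path[t](u)
        t = 1 - t
        u = from_path[t](x)
    else:
        # Ordinary large-step mutation in primary space, standard MH accept.
        u_new = rng.random()
        if rng.random() * pullback[t](u) < pullback[t](u_new):
            u = u_new
    xs.append(to_path[t](u))
```

Both techniques leave the target marginal over `x` invariant, so the chain's sample mean approaches 1 (the mean of Exponential(1)) regardless of how often it switches.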
We present a fast, novel image-based technique for reverse engineering the Bidirectional Texture Function (BTF) of woven fabrics. To recover our models, we estimate a depth map and a set of yarn parameters (yarn width, yarn crossovers, and so on) from spatial- and frequency-domain cues. We solve for the woven fabric pattern and from this build a volumetric data set. We use a combination of image-space analysis and frequency-domain analysis, and in challenging cases we match image statistics against those from previously captured known patterns. Our method determines, from a single digital image, the woven cloth structure, depth, and albedo, removing the need for separately measured depth data. The focus of this work is the rapid acquisition of woven cloth structure, so we use standard approaches to render the results. Our pipeline first estimates the weave pattern, yarn characteristics, albedo, and noise statistics using a novel combination of low-level image processing and Fourier analysis. Next, we estimate a depth map for the fabric sample using a first-order Markov chain with our estimated noise model as input. Finally, a volumetric BTF model is constructed from the recovered depth and albedo maps.
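The frequency-domain cue exploited above is that yarns produce a quasi-periodic brightness profile whose repeat length shows up as a dominant Fourier peak. A minimal sketch of that step (the function name and synthetic profile are hypothetical stand-ins for the paper's pipeline):

```python
import numpy as np

def dominant_period(signal):
    """Estimate the repeat length (in pixels) of a quasi-periodic signal,
    e.g. a brightness profile across yarns, from its strongest Fourier bin."""
    s = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(s))
    k = 1 + int(np.argmax(spectrum[1:]))   # skip the DC bin
    return len(s) / k

# Synthetic profile: 8 yarns across 128 pixels -> a period of 16 pixels.
x = np.arange(128)
profile = 0.5 + 0.4 * np.cos(2.0 * np.pi * x / 16.0)
period = dominant_period(profile)
```

Dividing the image width by the recovered period gives the yarn count, from which yarn width and crossover spacing can be bootstrapped.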
We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both networks jointly so that the discriminator network discerns whether a line drawing is real training data or the output of the simplification network, which in turn tries to fool it. This approach not only encourages the output sketches to be more similar in appearance to the training sketches, but also allows training with additional unsupervised data. By training with additional rough sketches and line drawings that do not correspond to each other, we can improve the quality of the sketch simplification. Our models significantly outperform the state of the art in the sketch simplification task, and we show that we can also optimize for a single image, which improves accuracy at the cost of additional computation time. Using the same framework, it is possible to train the network to perform pencil drawing generation, which is not possible using a standard mean squared error loss. We validate our framework with two user tests, in which our approach is preferred to the state of the art in sketch simplification 88.9% of the time.
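The joint objective described above pairs a supervised reconstruction term with an adversarial term, and the discriminator's "real" term can be fed unpaired line drawings. A minimal sketch of those loss terms (function names, labels, and the `alpha` weight are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def bce(pred, label):
    # Binary cross-entropy on discriminator probabilities.
    p = np.clip(pred, 1e-7, 1.0 - 1e-7)
    return float(-np.mean(label * np.log(p) + (1.0 - label) * np.log(1.0 - p)))

def simplification_loss(output, target_clean, d_on_output, alpha=1.0):
    # Generator objective: supervised MSE to the clean line drawing, plus an
    # adversarial term rewarding outputs the discriminator labels "real" (1).
    mse = float(np.mean((output - target_clean) ** 2))
    return mse + alpha * bce(d_on_output, np.ones_like(d_on_output))

def discriminator_loss(d_on_real, d_on_fake):
    # Real line drawings are labelled 1, simplification outputs 0. The first
    # term needs no matching rough sketch, enabling unsupervised data.
    return bce(d_on_real, np.ones_like(d_on_real)) + \
           bce(d_on_fake, np.zeros_like(d_on_fake))
```

Alternating gradient steps on these two losses is what lets the output distribution drift toward real line drawings, beyond what the MSE term alone can express.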
Many graphics and vision problems are naturally expressed as optimizations with either linear or non-linear least squares objective functions over visual data, such as images and meshes. The mathematical descriptions of these functions are extremely concise, but their implementation in real code is tedious, especially when optimized for real-time performance in interactive applications. We propose a new language, Opt, in which a user simply writes energy functions over image- or graph-structured unknowns, and a compiler automatically generates state-of-the-art GPU optimization kernels. The end result is a system in which real-world energy functions in graphics and vision applications are expressible in tens of lines of code. They compile directly into highly optimized GPU solver implementations with performance competitive with the best published hand-tuned, application-specific GPU solvers, and 1--2 orders of magnitude beyond a general-purpose auto-generated solver.
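The solver family such compilers generate for nonlinear least-squares energies is Gauss-Newton (and variants like Levenberg-Marquardt). A minimal CPU sketch of the update an energy description is lowered to (the function and the line-fit energy are illustrative, not Opt's API):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    """Minimize the least-squares energy ||r(x)||^2 by Gauss-Newton steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)  # normal-equation step
    return x

# Toy energy: fit a line a*t + b to data. The residual is linear in (a, b),
# so one step is exact; nonlinear energies simply iterate the same update.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 3.0 * t + 1.0
r = lambda x: x[0] * t + x[1] - y
J = lambda x: np.column_stack([t, np.ones_like(t)])
fit = gauss_newton(r, J, [0.0, 0.0])  # converges to a = 3, b = 1
```

In Opt the user writes only the residual-level energy; the compiler derives the Jacobian products and emits fused GPU kernels for this same update.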
Implicitizing rational surfaces is a fundamental computational task in Algorithmic Algebraic Geometry. Although the resultant of three moving planes corresponding to a μ-basis for a rational surface is guaranteed to contain the implicit equation of the surface as a factor, μ-bases for rational surfaces are difficult to compute. Moreover, μ-bases often have high degrees, so these resultants generally contain many extraneous factors. Here we develop fast algorithms to implicitize rational tensor product surfaces by computing the resultant of three moving planes corresponding to three syzygies with low degrees. These syzygies are easy to compute, and the resultants of the corresponding moving planes generally contain fewer extraneous factors than the resultants of the moving planes corresponding to μ-bases. We predict and compute all the possible extraneous factors that may appear in these resultants. Examples are provided to clarify and illuminate the theory.
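The principle underlying the approach above---eliminating the parameters via a resultant yields (a multiple of) the implicit equation---is easiest to see in the curve case. A minimal sketch using SymPy (assumed available; the paper's surface setting with three moving planes is beyond this short example):

```python
from sympy import symbols, resultant, expand

t, x, y = symbols('t x y')

# Parameterization of a parabola: x = t, y = t^2. Eliminating the parameter
# t via the resultant of the two defining polynomials recovers the implicit
# equation y - x^2 = 0 (possibly up to sign and extraneous factors in
# more general cases).
f = x - t
g = y - t**2
implicit = expand(resultant(f, g, t))
```

For surfaces the same elimination is done with the resultant of three moving planes, and the paper's contribution is choosing low-degree syzygies so the extraneous factors stay few and predictable.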