Aligning a pair of shapes is a fundamental operation in computer graphics. Traditional alignment methods rely heavily on matching corresponding points or features, a paradigm that falters when significant portions of a shape are missing. In this paper, we present a novel technique for aligning two 2D shapes that is robust to shape incompleteness. We take an unsupervised learning approach and train a neural network on the task of shape alignment using pairs of shapes for self-supervision. Our network, called FFDnet, learns the space of warping transformations between two shapes by performing a free-form deformation on a source shape such that it aligns well with a potentially geometrically distinct partial target shape. Through this extensive training, the network develops specialized expertise in the common characteristics of the shapes in each dataset, supporting a higher-level understanding of the expected shape space that a local approach is oblivious to. Specifically, the network is trained to warp complete source shapes to incomplete targets generated from full shapes, as if the target shapes were complete, thus essentially rendering the alignment partial-shape agnostic. We constrain FFDnet through an anisotropic total variation identity regularization to promote piecewise smooth deformation fields.
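For concreteness, a discrete anisotropic total variation penalty on a dense displacement field can be sketched as below; the function name, the forward-difference discretization, and the interpretation of the field as a deviation from the identity map are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def anisotropic_tv(field):
    """Anisotropic total variation of a dense 2D deformation field.

    field: (H, W, 2) array of per-pixel displacements (deviation from
    the identity map). Summing absolute forward differences along each
    axis penalizes oscillatory warps while still permitting
    piecewise-smooth ones.
    """
    dx = np.abs(np.diff(field, axis=1)).sum()
    dy = np.abs(np.diff(field, axis=0)).sum()
    return dx + dy
```

A constant field incurs zero penalty, and a step edge contributes once per crossing, which is why TV-style terms favor piecewise-smooth deformations.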
Most recent garment-capturing techniques rely on acquiring multiple views of clothing, which may not always be readily available, especially for pre-existing photographs from the web. As an alternative, we propose a method that computes a rich and realistic 3D model of a human body and its outfits from a single photograph with little human interaction. Our algorithm not only captures the global shape and geometry of the clothing, but also extracts small yet important details of cloth, such as occluded wrinkles and folds. Unlike previous methods using full 3D information (i.e., depth, multi-view images, or sampled 3D geometry), our approach achieves detailed garment recovery from a single-view image by using statistical, geometric, and physical priors and a combination of parameter estimation, semantic parsing, shape recovery, and physics-based cloth simulation. We demonstrate the effectiveness of our algorithm by re-purposing the reconstructed garments for virtual try-on and garment transfer applications, as well as cloth animation for digital characters.
We present an integrated approach for reconstructing high-fidelity 3D models using consumer RGB-D cameras. RGB-D registration and reconstruction algorithms are prone to errors from scanning noise, making accurate 3D reconstruction difficult. The key idea of our method is to assign a probabilistic uncertainty model to each depth measurement, which then guides the scan alignment and depth fusion. This allows us to effectively handle inherent noise and distortion in depth maps while keeping the overall scan registration procedure within the iterative closest point (ICP) framework for simplicity and efficiency. We further introduce a local-to-global, submap-based, and uncertainty-aware global pose optimization scheme to improve scalability and guarantee global model consistency. Finally, we have implemented the proposed algorithm on the GPU, achieving real-time 3D scanning frame rates and updating the reconstructed model on the fly. Experimental results on simulated and real-world data demonstrate that the proposed method outperforms state-of-the-art systems in terms of the accuracy of both recovered camera trajectories and reconstructed models.
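The core alignment step can be illustrated with a minimal uncertainty-weighted rigid fit (a weighted Kabsch/Umeyama solve), which is what one ICP iteration reduces to once correspondences are fixed; the function name and interface are assumptions, and the paper's full system adds the probabilistic depth model and global pose optimization on top.

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """One uncertainty-weighted alignment step in the spirit of ICP.

    src, dst: (N, 3) corresponding points; w: (N,) per-point weights,
    e.g. inverse depth-measurement variance. Returns R, t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2 (weighted Kabsch).
    """
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    # Weighted cross-covariance between the centered point sets.
    S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(S)
    # Reflection guard keeps R a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Down-weighting high-uncertainty depth samples in `w` is what lets noisy measurements participate in registration without dominating it.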
Simulating (elastically) deformable models that can collide with each other and with the environment remains a challenging task. The resulting contact problems can be elegantly approached using Lagrange multipliers to represent the unknown magnitude of the response forces. Typical methods construct and solve a Linear Complementarity Problem (LCP) to obtain the response forces. This requires the inverse of the generalized mass matrix, which is in general hard to obtain for deformable-body problems. In this paper we tackle such contact problems by directly solving the Mixed Linear Complementarity Problem (MLCP) and omitting the construction of an LCP matrix. Since the MLCP is equivalent to a convex quadratic program subject to inequality constraints, we propose to use a Conjugate Residual (CR) solver as the backbone of our collision-response system. We also propose a simple yet efficient preconditioner that ensures faster convergence. Finally, our approach is faster than existing methods (at the same accuracy), and it allows accurate treatment of friction.
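To make the solver backbone concrete, here is a minimal unpreconditioned Conjugate Residual iteration for a symmetric system, needing only matrix-vector products; the function name and interface are illustrative assumptions, and the paper's system additionally handles complementarity conditions and adds a preconditioner.

```python
import numpy as np

def conjugate_residual(A, b, x0=None, tol=1e-10, max_iter=200):
    """Minimal Conjugate Residual solver for a symmetric system A x = b.

    CR only needs products A @ v, so systems like the contact problems
    above never require forming or inverting a mass-matrix inverse.
    """
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.copy()
    r = b - A @ x
    if np.linalg.norm(r) < tol:
        return x
    p = r.copy()
    Ar = A @ r
    Ap = Ar.copy()
    rAr = r @ Ar
    for _ in range(max_iter):
        alpha = rAr / (Ap @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        Ar = A @ r
        rAr_new = r @ Ar
        beta = rAr_new / rAr
        rAr = rAr_new
        p = r + beta * p
        Ap = Ar + beta * Ap   # update A @ p without an extra product
    return x
```

Because each iteration costs one matrix-vector product, a good preconditioner (as the abstract proposes) directly reduces wall-clock time by cutting the iteration count.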
Synthesizing motions for legged characters in arbitrary environments is a long-standing problem that has recently received much attention from the computer graphics community. We tackle this problem with a procedural approach that is generic, fully automatic, and independent of motion capture data. The main contribution of this paper is a point-mass-model-based method to synthesize Center of Mass trajectories. These trajectories are then used to generate the whole-body motion of the character. The use of a point mass model often results in physically inconsistent motions and joint limit violations. We mitigate these issues through a novel formulation of the kinematic constraints that allows us to generate a quasi-static Center of Mass trajectory in a way that is both user-friendly and computationally efficient. We also show that the quasi-static constraint can be relaxed to generate motions usable for computer graphics applications (on average, 83% of a given trajectory remains physically consistent). Our method was integrated in an open-source contact planner and tested in different scenarios, some never addressed before, featuring legged characters performing motions in cluttered environments. The computational efficiency of our trajectory generation algorithm enables us to synthesize motions in a few seconds.
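As a point of reference, the textbook quasi-static balance test checks that the ground projection of the Center of Mass lies inside the support polygon of the active contacts; the sketch below implements that standard criterion for a convex, counter-clockwise polygon, and the function name is a hypothetical, not the paper's exact constraint formulation.

```python
def com_is_quasi_static(com_xy, support_polygon):
    """Standard quasi-static balance test: the ground projection of the
    Center of Mass must lie inside the convex support polygon spanned
    by the active contacts (vertices given in CCW order).
    """
    x, y = com_xy
    n = len(support_polygon)
    for i in range(n):
        x1, y1 = support_polygon[i]
        x2, y2 = support_polygon[(i + 1) % n]
        # For a CCW polygon, the point must be on the left of every edge,
        # i.e. the 2D cross product must be non-negative.
        if (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1) < 0:
            return False
    return True
```

Relaxing this test on part of the trajectory is what admits the dynamic-looking motions the abstract reports (83% of frames remaining physically consistent on average).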
We exploit the permutation-creation ability of genetic optimization to find the permutation of one point set that puts it into correspondence with another. To this end, we provide the first, to our knowledge, genetic algorithm for 3D shape correspondence, which is the main contribution of this paper. As another significant contribution, we present an adaptive sampling approach that relocates the matched points based on the currently available correspondence via an alternating optimization. The point sets to be matched are sampled from two isometric (or nearly isometric) shapes. The sparse one-to-one correspondence, i.e., bijection, that we produce is validated in terms of both timing and accuracy in a comprehensive evaluation suite that includes three standard shape benchmarks and state-of-the-art techniques.
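A toy version of a genetic search over permutations can be sketched as follows; the fitness (isometric distortion between distance matrices), the swap-mutation operator, and the elitist selection are generic stand-ins assumed here, not the paper's actual operator set.

```python
import numpy as np

def ga_correspondence(D1, D2, pop=40, gens=300, seed=0):
    """Toy genetic algorithm over permutations for shape correspondence.

    D1, D2: (n, n) pairwise distance matrices of two sampled shapes.
    Fitness of a candidate bijection p is its isometric distortion
    sum_ij |D1[i, j] - D2[p[i], p[j]]|. Uses elitist selection, random
    immigrants, and swap mutation.
    """
    rng = np.random.default_rng(seed)
    n = len(D1)

    def cost(p):
        return np.abs(D1 - D2[np.ix_(p, p)]).sum()

    population = [rng.permutation(n) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=cost)
        elite = population[: pop // 4]
        # Fresh random permutations keep the search from stagnating.
        children = [rng.permutation(n) for _ in range(pop // 8)]
        while len(elite) + len(children) < pop:
            child = elite[rng.integers(len(elite))].copy()
            i, j = rng.integers(n, size=2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = elite + children
    return min(population, key=cost)
```

On a pair of exactly isometric samplings the distortion of the recovered bijection drops to zero, which is the property the abstract's evaluation measures at scale.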
Many strategies exist for optimizing non-linear distortion energies in geometry and physics applications, but devising an approach that achieves the convergence promised by Newton-type methods remains challenging. In order to guarantee the positive semi-definiteness required by these methods, a numerical eigendecomposition or an approximate regularization is usually needed. In this paper, we present analytic expressions for the eigensystems at each quadrature point of a wide range of isotropic distortion energies. These systems can then be used to project energy Hessians to positive semi-definiteness analytically. Unlike previous attempts, our formulation provides compact expressions that are valid both in 2D and 3D, and does not introduce spurious degeneracies. At its core, our approach utilizes the invariants of the stretch tensor that arises from the polar decomposition of the deformation gradient. We provide closed-form expressions for the eigensystems of all these invariants, and use them to systematically derive the eigensystems of any isotropic energy. Our results are suitable for geometry optimization over flat surfaces or volumes, and agnostic to the choice of both discretization and basis function. To demonstrate the efficiency of our approach, we include comparisons against existing methods on common graphics tasks such as surface parameterization and volume deformation.
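Two of the building blocks named above can be made concrete: the numerical PSD projection that the analytic eigensystems are designed to replace, and the stretch-tensor invariants computed from the singular values of the deformation gradient. The invariant convention below (I1 = tr S, I2 = tr S^T S, I3 = det S) is one common choice and an assumption here, as are the function names.

```python
import numpy as np

def project_psd(H):
    """Project a symmetric energy Hessian to positive semi-definiteness
    by clamping negative eigenvalues to zero. This is the per-element
    numerical eigendecomposition that closed-form eigensystems avoid.
    """
    lam, Q = np.linalg.eigh(H)
    return (Q * np.maximum(lam, 0.0)) @ Q.T

def stretch_invariants(F):
    """Invariants of the stretch tensor S from the polar decomposition
    F = R S, obtained from the singular values of F:
    I1 = tr S, I2 = tr(S^T S), I3 = det S."""
    s = np.linalg.svd(F, compute_uv=False)
    return s.sum(), (s ** 2).sum(), s.prod()
```

Replacing the `eigh` call with closed-form eigenpairs, per quadrature point, is precisely where the analytic approach saves work inside a Newton solve.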
This article presents an iterative backward-warping technique and its applications. It predictively synthesizes depth buffers for novel views. Our solution is based on a fixed-point iteration that converges quickly in practice. Unlike previous techniques, it performs purely backward warping without using bidirectional sources. To efficiently seed the iterative process, we also propose a tight bounding method for motion vectors. Non-convergent depth holes are inpainted via deep depth buffers. Our solution works well with arbitrarily distributed motion vectors under moderate motion. Many scenarios can benefit from our depth warping. As an application, we propose a highly scalable image-based occlusion-culling technique that achieves a significant speedup over the state of the art. We also demonstrate the benefit of our solution in multi-view soft-shadow generation.
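The fixed-point idea behind backward warping can be shown in one dimension: for a target coordinate x we seek the source coordinate s with s + motion(s) = x, i.e. the fixed point of s ← x - motion(s). The sketch below assumes a contractive motion field (moderate motion); the function name and seeding choice are hypothetical, and the paper seeds with tighter motion-vector bounds.

```python
def backward_fetch(x, motion, iters=8):
    """Fixed-point search for backward warping, sketched in 1D.

    `motion` maps a source coordinate to its forward displacement.
    We solve s + motion(s) = x by iterating s <- x - motion(s);
    under moderate motion this is a contraction and converges fast.
    """
    s = x  # naive seed; tighter motion bounds speed up convergence
    for _ in range(iters):
        s = x - motion(s)
    return s
```

Pixels where the iteration fails to converge correspond to the depth holes that the abstract inpaints from deep depth buffers.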
A large number of imaging and computer graphics applications require localized information on the visibility of image distortions. Existing image quality metrics are not suitable for this task, as they provide a single quality value per image. Existing visibility metrics produce visual difference maps and are specifically designed to detect just-noticeable distortions, but their predictions are often inaccurate. In this work, we argue that the key reason for this problem is the lack of large image collections with good coverage of the distortions that occur in different applications. To address the problem, we collect an extensive dataset of reference and distorted image pairs together with user markings indicating whether distortions are visible or not. We propose a statistical model designed for the meaningful interpretation of such data, which is affected by visual search and the imprecision of manual marking. We use our dataset to train existing metrics and demonstrate that their performance significantly improves. We show that our dataset with the proposed statistical model can be used to train a new CNN-based metric, which outperforms existing solutions. We demonstrate the utility of such a metric in visually lossless JPEG compression, super-resolution, and watermarking.
Subtractive manufacturing technologies, such as 3-axis milling, add a useful tool to the digital manufacturing arsenal. However, each milling pass using such machines can only carve a single height-field, defined with respect to the machine tray. We enable fabrication of general shapes using 3-axis milling, providing a robust algorithm to decompose any shape into a few height-field blocks. Such blocks can be manufactured with a single milling pass and then assembled to form the target geometry. Computing such decompositions requires solving a complex discrete optimization problem, variants of which are known to be NP-hard. We propose a two-step process, based on the observation that if the height-field directions are constrained to the major axes, we can guarantee a valid decomposition starting from a suitable surface segmentation. Our method first produces a compact set of large, possibly overlapping, height-field blocks that jointly cover the model surface. We then compute an overlap-free decomposition via a combination of cycle elimination and topological sorting on a graph. The algorithm produces a compact set of height-field blocks that jointly describe the input model and satisfy all manufacturing constraints. We demonstrate our method on a range of inputs, and showcase several real-life models manufactured using our technique.
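The graph step can be illustrated with a standard Kahn-style topological sort over a block-dependency graph: once cycles have been eliminated, it yields an ordering of the blocks consistent with the directed overlap edges, and a `None` result signals that a cycle remains. The graph encoding below is an assumed simplification of the paper's construction.

```python
from collections import deque

def topological_order(n, edges):
    """Kahn's algorithm on a directed graph of n nodes.

    edges: iterable of (a, b) pairs meaning block a must precede
    block b. Returns a consistent ordering, or None if the graph
    still contains a cycle (so cycle elimination must run first).
    """
    indeg = [0] * n
    adj = [[] for _ in range(n)]
    for a, b in edges:
        adj[a].append(b)
        indeg[b] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None
```
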
The computational cost of creating realistic fluid animations by numerical simulation is generally high. In digital production environments, existing precomputed fluid animations are often reused for different scenes in order to reduce the cost of creating scenes containing fluids. However, applying the same animation to different scenes often produces unacceptable results, so the animation needs to be edited. To help animators with this editing process, we develop a novel method for synthesizing the desired flow fields by combining existing flow field data. Our system allows the user to place flow fields at desired positions and combine them by interpolation at the boundaries between these flow fields to synthesize a new flow field. The interpolation of the flow fields is realized by minimizing an energy function designed to satisfy the incompressible Navier-Stokes equations. Our method focuses on smoke simulations defined on a uniform grid. We demonstrate the potential of our method with a set of examples, including a large-scale sandstorm created from a few flow fields simulated in a small-scale space.
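The incompressibility constraint the energy function targets is measurable on a uniform grid as the discrete divergence of the velocity field; the sketch below, with an assumed unit grid spacing and central differences, shows the quantity an interpolated field should keep near zero.

```python
import numpy as np

def divergence(u, v):
    """Central-difference divergence of a 2D velocity field sampled on
    a uniform grid with unit spacing: du/dx + dv/dy. Incompressible
    flow should keep this near zero everywhere.
    """
    dudx = np.gradient(u, axis=1)
    dvdy = np.gradient(v, axis=0)
    return dudx + dvdy
```

Checking this residual on the blended boundary regions is a simple way to see whether an interpolation respects the incompressible Navier-Stokes constraint.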
In this paper, we introduce a surface reconstruction method that performs gracefully on non-uniformly distributed, noisy, and even sparse data. We reconstruct the surface by estimating an implicit function and then obtain a triangle mesh by extracting an iso-surface from it. Our implicit function takes advantage of both the indicator function and the signed distance function: it is dominated by the indicator function in regions away from the surface and approximates the signed distance function near the surface. On one hand, it is well defined over the entire space, so the extracted iso-surface must lie near the underlying true surface and is free of spurious sheets. On the other hand, thanks to the nice properties of the signed distance function, a smooth iso-surface can be extracted using the marching cubes approach with simple linear interpolation. More importantly, our implicit function can be estimated directly from an explicit integral formula without solving any linear system. This direct approach leads to a simple, accurate, and robust reconstruction method, which can be parallelized with little overhead. We apply our method to both synthetic and real-world scanned data and demonstrate the accuracy, robustness, and efficiency of our method.
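The linear interpolation that marching cubes relies on is a one-liner: on a cell edge whose endpoint values straddle the iso-level (here zero), the vertex is placed at the linearly interpolated zero crossing, which is accurate exactly because the implicit function behaves like a signed distance near the surface. The function name below is illustrative.

```python
def edge_crossing(p0, p1, f0, f1):
    """Place an iso-surface vertex on the edge p0-p1 whose implicit
    values f0, f1 have opposite signs, by linear interpolation of the
    zero crossing (the marching cubes vertex rule)."""
    t = f0 / (f0 - f1)  # fraction of the way from p0 toward p1
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```
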
Everyone uses the sense of touch to explore the world, and roughness is one of the most important qualities in tactile perception. Roughness is a major identifier for judgments of material composition, comfort and friction, and it is tied closely to manual dexterity. The advent of high-resolution 3D printing technology provides the ability to fabricate arbitrary 3D textures with surface geometry that confers haptic properties. In this work, we address the problem of mapping object geometry to tactile roughness. We fabricate a set of carefully designed stimuli and use them in experiments with human subjects to build a perceptual space for roughness. We then match this space to a quantitative model obtained from strain fields derived from elasticity simulations of the human skin contacting the texture geometry, drawing from past research in neuroscience and psychophysics. We demonstrate how this model can be applied to predict and alter surface roughness, and we show several applications in the context of fabrication.