In shape analysis and matching, it is often important to encode information about the relation between a given point and other points on a shape, namely its context. To this end we propose a theoretically sound and efficient approach for simulating a discrete-time evolution process that runs through all possible geodesic paths between pairs of points on a surface, represented in the discrete setting as a triangle mesh. We show how this construction can be used to efficiently build a multiscale point descriptor, called the Discrete-time Evolution Process descriptor, which robustly encodes the structure of the geodesic neighborhoods of a point across multiple scales. Our work is closely related to methods based on diffusion geometry and derived signatures such as the HKS or the WKS, and provides information that is complementary to these descriptors. We demonstrate through extensive experimental evaluation that our descriptor shows promising results in both shape matching and shape segmentation scenarios. Our approach outperforms similar methods, especially when comparing shapes undergoing large non-isometric deformations and in the presence of missing parts.
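As a rough intuition for how a discrete-time evolution process can yield a multiscale point descriptor, the sketch below runs plain graph diffusion from a source vertex and snapshots the resulting distribution at several time steps. The function name, the step schedule, and the use of simple graph diffusion (rather than the paper's geodesic-path process on a triangle mesh) are all illustrative assumptions.

```python
import numpy as np

def multiscale_diffusion_descriptor(W, src, steps=(1, 2, 4, 8), dt=0.2):
    """Toy multiscale descriptor: diffuse a unit impulse from vertex `src`
    over a graph with adjacency matrix W, snapshotting the heat distribution
    after several numbers of discrete time steps (the "scales")."""
    L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian
    u = np.zeros(len(W))
    u[src] = 1.0                             # unit impulse at the source vertex
    snapshots = []
    for s in range(max(steps) + 1):
        if s in steps:
            snapshots.append(u.copy())
        u = u - dt * (L @ u)                 # explicit Euler diffusion step
    return np.stack(snapshots)

# 4-cycle graph 0-1-2-3-0: the descriptor has one row per scale
W = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
D = multiscale_diffusion_descriptor(W, src=0)
assert D.shape == (4, 4)
assert np.allclose(D.sum(axis=1), 1.0)       # diffusion conserves total heat
```

Each row of `D` describes how far "heat" from the source has spread at one scale, which is the kind of neighborhood-structure information a multiscale descriptor aggregates.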
The depth resolution achieved by a continuous wave time-of-flight (C-ToF) imaging system is determined by the coding (modulation and demodulation) functions that it uses. Almost all current C-ToF systems use sinusoid or square coding functions, resulting in a limited depth resolution. In this paper, we present a mathematical framework for exploring and characterizing the space of C-ToF coding functions in a geometrically intuitive space. Using this framework, we design families of novel coding functions that are based on Hamiltonian cycles on hypercube graphs. Given a fixed total source power and acquisition time, the new Hamiltonian coding scheme can achieve up to an order of magnitude higher resolution compared to current state-of-the-art methods, especially in low SNR settings. We also develop a comprehensive physically-motivated simulator for C-ToF cameras that can be used to evaluate various coding schemes prior to a real hardware implementation. Since most off-the-shelf C-ToF sensors use sinusoid or square functions, we develop a hardware prototype that can implement a wide range of coding functions. Using this prototype and our software simulator, we demonstrate the performance advantages of the proposed Hamiltonian coding functions in a wide range of imaging settings.
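To make the hypercube connection concrete, here is a minimal sketch: the binary-reflected Gray code traces a Hamiltonian cycle on the n-dimensional hypercube graph, and reading off one bit plane along that cycle yields a family of binary functions of the kind such a coding scheme can build on. The function names are illustrative, and the paper's actual coding functions involve further design choices beyond this construction.

```python
def gray_code(n):
    """Binary-reflected Gray code: a Hamiltonian cycle on the n-dimensional
    hypercube graph (successive codes differ in exactly one bit)."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

def bit_plane_functions(n):
    """One binary function per bit, each sampled at the 2^n cycle vertices:
    read bit k of every code while walking the Hamiltonian cycle."""
    cycle = gray_code(n)
    return [[(c >> k) & 1 for c in cycle] for k in range(n)]

codes = gray_code(4)
assert sorted(codes) == list(range(16))            # every hypercube vertex visited once
assert all(bin(codes[i] ^ codes[(i + 1) % 16]).count("1") == 1
           for i in range(16))                     # each step flips exactly one bit
```

The one-bit-per-step property is what makes the cycle Hamiltonian on the hypercube: each vertex is a binary code, and edges connect codes at Hamming distance one.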
A conformal flattening maps a curved surface to the plane without distorting angles---such maps have become a fundamental building block for problems in geometry processing, numerical simulation, and computational design. Yet existing methods provide little direct control over the shape of the flattened domain, or else demand expensive nonlinear optimization. Boundary first flattening (BFF) is a linear method for conformal parameterization which is faster than traditional linear methods, yet provides control and quality comparable to sophisticated nonlinear schemes. The key insight is that boundary data can be efficiently constructed via the Cherrier formula together with a pair of Poincaré-Steklov operators; once the boundary is known, the map is easily extended over the rest of the domain. Since computation demands only a single factorization of the real Laplacian, the amortized cost is about 50x less than any previous technique for boundary-controlled conformal flattening. BFF therefore opens the door to real-time editing or fast optimization of high-resolution maps, with direct control over boundary length or angle. We apply this method to maps with sharp corners, cone singularities, minimal area distortion, and uniformization; we also demonstrate for the first time how a surface can be conformally flattened directly onto any given target shape.
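The "extended over the rest of the domain" step amounts to a harmonic (Dirichlet) extension of known boundary values, which requires only one solve (in practice, one factorization) of the Laplacian. A minimal dense sketch, assuming a precomputed graph Laplacian `L`; BFF itself works with the cotangent Laplacian of the mesh and 2D boundary positions:

```python
import numpy as np

def harmonic_extension(L, bnd_idx, bnd_vals):
    """Extend prescribed boundary values harmonically into the interior by
    solving the Dirichlet problem  L_II u_I = -L_IB u_B."""
    n = L.shape[0]
    interior = np.setdiff1d(np.arange(n), bnd_idx)
    A = L[np.ix_(interior, interior)]          # interior-interior block
    b = -L[np.ix_(interior, bnd_idx)] @ bnd_vals
    u = np.empty(n)
    u[bnd_idx] = bnd_vals
    u[interior] = np.linalg.solve(A, b)        # in practice: one sparse Cholesky
    return u

# path graph 0-1-2-3-4: the harmonic extension of endpoint values is linear
W = np.diag(np.ones(4), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
u = harmonic_extension(L, np.array([0, 4]), np.array([0.0, 1.0]))
assert np.allclose(u, [0.0, 0.25, 0.5, 0.75, 1.0])
```

Because the interior block `A` is fixed once the mesh is fixed, it can be factored a single time and reused for every new set of boundary data, which is the source of the method's speed.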
We present a new local descriptor for 3D shapes, directly applicable to a wide range of shape analysis problems such as point correspondences, semantic segmentation, affordance prediction, and shape-to-scan matching. The descriptor is produced by a convolutional network that is trained to embed geometrically and semantically similar points close to one another in descriptor space. The network processes surface neighborhoods around points on a shape that are captured at multiple scales by a succession of progressively zoomed-out views, taken from carefully selected camera positions. We leverage two extremely large sources of data to train our network. First, since our network processes rendered views in the form of 2D images, we repurpose architectures pre-trained on massive image datasets. Second, we automatically generate a synthetic dense correspondence dataset by part-aware, non-rigid alignment of a massive collection of 3D models. As a result of these design choices, our network effectively encodes multi-scale local context and fine-grained surface detail. We demonstrate through several experiments that our learned local descriptors are more discriminative compared to state-of-the-art alternatives, and are effective in a variety of shape analysis applications.
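The training objective described above can be caricatured with a standard contrastive loss, which pulls descriptors of corresponding points together and pushes non-corresponding ones apart by a margin. The names and the margin value here are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def contrastive_loss(f_a, f_b, same, margin=1.0):
    """Pull matching point descriptors together; push non-matching ones
    at least `margin` apart in descriptor space."""
    d = np.linalg.norm(f_a - f_b, axis=-1)
    return np.where(same, d ** 2, np.maximum(0.0, margin - d) ** 2)

# a matching pair at distance 0 costs nothing; a non-matching pair is only
# free of penalty once it has been pushed past the margin
assert contrastive_loss(np.zeros(2), np.zeros(2), True) == 0.0
assert contrastive_loss(np.zeros(2), np.zeros(2), False) == 1.0
assert contrastive_loss(np.zeros(2), np.array([2.0, 0.0]), False) == 0.0
```

Minimizing such a loss over many labeled pairs is what shapes the descriptor space so that nearest-neighbor lookups recover correspondences.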
We present the first technique to create wide-angle, high-resolution looping panoramic videos. Starting with a 2D grid of registered videos acquired on a robotic mount, we formulate a combinatorial optimization to determine for each output pixel the source video and looping parameters that jointly maximize spatiotemporal consistency. This optimization is accelerated by reducing the set of source labels using a graph-coloring scheme. We parallelize the computation and implement it out-of-core by partitioning the domain along low-importance paths. The merged panorama is assembled using gradient-domain blending and stored as a hierarchy of video tiles. Finally, an interactive viewer adaptively preloads these tiles for responsive browsing and allows the user to interactively edit and improve local regions. We demonstrate these techniques on gigapixel-sized looping panoramas.
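As a single-pixel caricature of the looping parameters the optimization selects per output pixel, the sketch below picks the loop start and period that minimize the temporal seam for one pixel's intensity series. The real objective also couples neighboring pixels for spatial consistency; names and the minimum-period value are illustrative.

```python
def best_loop(pixel_ts, min_period=3):
    """Pick (start, period) minimizing the loop seam |I[s] - I[s+p]| for one
    pixel's intensity series -- a tiny stand-in for the per-pixel looping
    parameters chosen by the full spatiotemporal optimization."""
    T = len(pixel_ts)
    best_cost, best_arg = float("inf"), None
    for p in range(min_period, T):          # candidate loop periods
        for s in range(T - p):              # candidate loop start frames
            cost = abs(pixel_ts[s] - pixel_ts[s + p])
            if cost < best_cost:
                best_cost, best_arg = cost, (s, p)
    return best_arg, best_cost

# a series with period 4 loops seamlessly with (start=0, period=4)
(start, period), cost = best_loop([0, 1, 2, 1, 0, 1, 2, 1, 0])
assert (start, period, cost) == (0, 4, 0)
```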
We present an integral framework for training sketch simplification networks that convert challenging rough sketches into clean line drawings. Our approach augments a simplification network with a discriminator network, training both networks jointly so that the discriminator discerns whether a line drawing is real training data or the output of the simplification network, which in turn tries to fool it. This approach not only encourages the output sketches to be more similar in appearance to the training sketches, but also allows training with additional unsupervised data. By training with additional rough sketches and line drawings that do not correspond to each other, we can improve the quality of the sketch simplification. Our models significantly outperform the state of the art in the sketch simplification task, and we show that we can also optimize for a single image, which improves accuracy at the cost of additional computation time. Using the same framework, it is possible to train the network to perform pencil drawing generation, which is not possible using the standard mean squared error loss. We validate our framework with two user tests, where our approach is preferred to the state of the art in sketch simplification 88.9% of the time.
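A toy version of such a joint objective combines a supervised reconstruction term with an adversarial term, sketched below with precomputed discriminator scores. The function and parameter names (including `alpha`) are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def joint_losses(sim_out, target, d_real, d_fake, alpha=1.0):
    """Toy joint objective: the simplification network minimizes MSE to the
    clean drawing plus an adversarial term rewarding outputs the
    discriminator scores as real; the discriminator minimizes classification
    error on real drawings vs. simplifier outputs.  `alpha` (assumed name)
    weighs the adversarial term."""
    eps = 1e-8
    mse = np.mean((sim_out - target) ** 2)
    g_loss = mse - alpha * np.mean(np.log(d_fake + eps))
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    return g_loss, d_loss

# fooling the discriminator (higher d_fake) lowers the simplifier's loss
zeros = np.zeros(4)
g_fooled, _ = joint_losses(zeros, zeros, np.array([0.9]), np.array([0.9]))
g_caught, _ = joint_losses(zeros, zeros, np.array([0.9]), np.array([0.1]))
assert g_fooled < g_caught
```

The adversarial term is also what permits unpaired data: it only asks whether an output looks like a real line drawing, not whether it matches a specific target.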
We propose an algorithm for generating novel 3D tree model variations from existing ones via geometric and structural blending. Our approach is to treat botanical trees as elements of a tree-shape space equipped with a proper metric that quantifies geometric and structural deformations. Geodesics, or shortest paths under the metric, between two points in the tree-shape space correspond to optimal deformations that align one tree onto another, including the possibility of growing, adding, or removing branches and parts. Central to our approach is a mechanism for computing correspondences between trees that have different structures and different numbers of branches. The ability to compute geodesics and their lengths enables us to compute continuous blending between botanical trees, which in turn facilitates statistical analysis such as the computation of averages of tree structures. We show a variety of 3D tree models generated with our approach from 3D trees exhibiting complex geometric and structural differences. We also demonstrate the application of the framework in reflection symmetry analysis and symmetrization of botanical trees.
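Once the branches of two trees have been put in correspondence, a continuous blend traverses the geodesic between them. The straight-line sketch below (hypothetical names, with branches stored as coordinate lists keyed by a shared label) covers only the geometric part; the paper's tree-shape geodesics additionally handle structural changes such as branch addition and removal.

```python
import numpy as np

def blend_trees(branches_a, branches_b, t):
    """Blend two trees whose branches are already in correspondence by
    walking a fraction t along the straight-line path between matched
    branch coordinates (geometric blending only; no topology changes)."""
    return {name: (1.0 - t) * np.asarray(pa) + t * np.asarray(branches_b[name])
            for name, pa in branches_a.items()}

tree_a = {"trunk": [0.0, 0.0, 1.0], "branch": [1.0, 0.0, 2.0]}
tree_b = {"trunk": [0.0, 0.0, 2.0], "branch": [2.0, 0.0, 3.0]}
mid = blend_trees(tree_a, tree_b, 0.5)
assert np.allclose(mid["trunk"], [0.0, 0.0, 1.5])   # halfway blend
assert np.allclose(blend_trees(tree_a, tree_b, 0.0)["branch"], tree_a["branch"])
```

Sampling `t` densely yields a continuous morph, and averaging several trees at matched parameters is the kind of statistic the geodesic framework enables.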






















