ACM Transactions on Graphics (TOG)

Latest Articles

Poly-Spline Finite-Element Method

We introduce an integrated meshing and finite-element method pipeline enabling solution of partial differential equations in the volume enclosed by a boundary representation. We construct a hybrid hexahedral-dominant mesh, which contains a small number of star-shaped polyhedra, and build a set of high-order bases on its elements, combining...

Video Extrapolation Using Neighboring Frames

With the popularity of immersive display systems that fill the viewer’s field of view (FOV) entirely, demand for wide FOV content has increased. A video extrapolation technique based on reuse of existing videos is one of the most efficient ways to produce wide FOV content. Extrapolating a video poses a great challenge, however, due to the...

Redefining A in RGBA: Towards a Standard for Graphical 3D Printing

Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded...

Non-line-of-sight Imaging with Partial Occluders and Surface Normals

Imaging objects obscured by occluders is a significant challenge for many applications. A camera that could “see around corners” could...

A Unified Framework for Compression and Compressed Sensing of Light Fields and Light Field Videos

In this article we present a novel dictionary learning framework designed for compression and...

News

New Submission Requirements

As of October 2018, ACM TOG requires submissions for review to be anonymous. See the Author Guidelines for details.  

About TOG

ACM TOG is the foremost peer-reviewed journal in the area of computer graphics. 

Recent impact factor calculations from Thomson Reuters give ACM TOG an impact factor of 4.096 and an Eigenfactor score of 0.029, the top ranking among the 104 journals in the Computer Science: Software Engineering category.

Forthcoming Articles
Dynamic Graph CNN for Learning on Point Clouds

Point clouds provide a flexible and scalable geometric representation suitable for countless applications in computer graphics; they also comprise the raw output of most 3D data acquisition devices. Hence, the design of intelligent computational models that act directly on point clouds is critical, especially when efficiency considerations or noise preclude the possibility of expensive denoising and meshing procedures. Although hand-designed features on point clouds have long been proposed in graphics and vision, the recent success of convolutional neural networks (CNNs) for image analysis suggests the value of adapting insights from CNNs to the point-cloud world. We propose a new neural-network module, EdgeConv, suitable for CNN-based high-level tasks on point clouds, including classification and segmentation. EdgeConv is differentiable and can be plugged into existing architectures. Compared to existing modules operating in extrinsic space or treating each point independently, EdgeConv has several appealing properties: it incorporates local neighborhood information; it can be stacked to learn global shape properties; and, in multi-layer systems, affinity in feature space captures semantic characteristics over potentially long distances in the original embedding. Beyond proposing this module, we provide extensive evaluation and analysis revealing that EdgeConv captures and exploits fine-grained geometric properties of point clouds.
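
As a rough, self-contained illustration of the idea (not the authors' implementation), the NumPy sketch below builds a k-nearest-neighbor graph, forms edge features from each point paired with its neighbor offsets, applies a shared linear-plus-ReLU edge function, and max-aggregates; the graph construction, edge function, and dimensions are all illustrative assumptions:

    import numpy as np

    def edge_conv(points, feats, W, k=8):
        """EdgeConv-style layer (illustrative): for each point, build edge
        features h(x_i, x_j - x_i) over its k nearest neighbors and
        max-aggregate. points: (N, 3) coordinates defining the k-NN graph;
        feats: (N, F) input features; W: (2F, F_out) weights of a linear
        edge function standing in for the shared MLP."""
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d2, np.inf)                 # exclude self-loops
        knn = np.argsort(d2, axis=1)[:, :k]          # (N, k) neighbor indices
        center = np.repeat(feats[:, None, :], k, 1)  # x_i broadcast to (N, k, F)
        offset = feats[knn] - center                 # x_j - x_i: local structure
        edge = np.concatenate([center, offset], -1)  # (N, k, 2F) edge features
        h = np.maximum(edge @ W, 0.0)                # linear edge function + ReLU
        return h.max(axis=1)                         # symmetric max aggregation

    # Toy usage: 32 random points, input features = coordinates.
    pts = np.random.rand(32, 3)
    out = edge_conv(pts, pts, np.random.randn(6, 16) * 0.1, k=8)
    print(out.shape)                                 # (32, 16)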

Neural Importance Sampling

We propose to use deep neural networks for generating samples in Monte Carlo integration. Our work is based on non-linear independent component analysis, which we extend to improve performance and enable its application to integration problems. First, we introduce piecewise-polynomial coupling transforms that greatly increase the modeling power of individual coupling layers. Second, we preprocess the inputs of neural networks using one-blob encoding, which stimulates localization of computation and improves inference. Third, we derive a gradient-descent-based optimization of the KL and χ² divergences for the specific application of Monte Carlo integration with unnormalized stochastic estimates of the target distribution. Our approach enables fast and accurate inference and efficient sample generation independent of the dimensionality of the integration domain. We demonstrate the benefits of our approach for generating natural images and in two applications to light-transport simulation. First, we show how to learn joint path-sampling densities in primary sample space and how to importance sample multi-dimensional path prefixes thereof. Second, we use our technique to extract conditional directional densities driven by the triple product of the rendering equation, and leverage them for path guiding. In all applications, our approach yields on-par or higher performance than competing techniques at equal sample count.
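
For intuition, here is a minimal sketch of a one-blob-style encoding, assuming a scalar input in [0, 1] smeared over a fixed number of bins with a Gaussian kernel; the bin count and kernel width are arbitrary choices, not the paper's:

    import numpy as np

    def one_blob(x, bins=32):
        """Encode a scalar x in [0, 1] as Gaussian activations over `bins`
        bins: a soft generalization of one-hot encoding. Nearby bins light
        up, which localizes computation in subsequent network layers."""
        centers = (np.arange(bins) + 0.5) / bins  # bin midpoints in [0, 1]
        sigma = 1.0 / bins                        # kernel width (one common choice)
        return np.exp(-0.5 * ((x - centers) / sigma) ** 2)

    print(np.round(one_blob(0.4, bins=8), 3))     # activations peak near x = 0.4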

Non-Smooth Newton Methods for Deformable Multi-Body Dynamics

We present a framework for the simulation of rigid and deformable bodies in the presence of contact and friction. In contrast to previous methods which solve linearized models, we develop a non-smooth Newton method that solves the underlying nonlinear complementarity problems (NCPs) directly. Our method supports two-way coupling between hyperelastic deformable bodies and articulated rigid mechanisms, includes a nonlinear contact model, and requires only the solution of a symmetric linear system as a building block. We propose a new complementarity preconditioner that improves convergence, and develop an efficient GPU-based solver based on the conjugate residual (CR) method that is suitable for interactive simulations. We show how to improve robustness using a new geometric stiffness approximation and evaluate our method's performance on a number of robotics simulation scenarios, including dexterous manipulation and training using reinforcement learning.
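
As a toy illustration of a non-smooth Newton solve, the sketch below applies the standard Fischer-Burmeister reformulation (an assumption; the abstract does not name one) to a small linear complementarity problem rather than the paper's contact NCPs, and does not reproduce the paper's complementarity preconditioner or GPU conjugate-residual solver:

    import numpy as np

    def fb(a, b):
        """Fischer-Burmeister function: fb(a, b) = 0 iff a >= 0, b >= 0, a*b = 0."""
        return a + b - np.sqrt(a * a + b * b)

    def nonsmooth_newton_lcp(A, b, iters=50, tol=1e-10):
        """Semismooth Newton on the FB reformulation of the LCP
        x >= 0, Ax + b >= 0, x . (Ax + b) = 0 (a stand-in for contact NCPs)."""
        x = np.zeros_like(b)
        for _ in range(iters):
            y = A @ x + b
            F = fb(x, y)
            if np.linalg.norm(F) < tol:
                break
            r = np.sqrt(x * x + y * y) + 1e-12   # regularize the kink at (0, 0)
            J = np.diag(1.0 - x / r) + (1.0 - y / r)[:, None] * A
            x = x - np.linalg.solve(J, F)
        return x

    # Toy usage: a 3x3 symmetric positive-definite LCP.
    A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
    b = np.array([-1.0, 2.0, -3.0])
    x = nonsmooth_newton_lcp(A, b)
    print(x, A @ x + b)                          # componentwise products ~ 0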

Constructing 3D Self-Supporting Surfaces with Isotropic Stress Using 4D Minimal Hypersurfaces of Revolution

This paper presents a new computational framework for constructing 3D self-supporting surfaces. Inspired by the self-supporting property of the catenary and the fact that the catenoid, the surface of revolution of the catenary curve, is a minimal surface, we discover the relation between 3D self-supporting surfaces and 4D minimal hypersurfaces (which are 3-manifolds). We prove that the hyper-generatrix of a 4D minimal hypersurface of revolution is a 3D self-supporting surface, implying that constructing a 3D self-supporting surface is equivalent to volume minimization. We show that the energy functional is simply the surface's gravitational potential energy, which in turn can be converted into a surface-reconstruction problem with a mean-curvature constraint. Armed with our theoretical findings, we develop an iterative algorithm to construct 3D self-supporting surfaces from triangle meshes. Our method guarantees convergence and can produce near-regular triangle meshes thanks to a local mesh-refinement strategy similar to centroidal Voronoi tessellation. It also allows users to tune the geometry by specifying either the zero-potential surface or its desired volume. We show that 1) given a boundary condition, if a stable minimal surface exists, so does the 3D self-supporting surface; and 2) the solution is not unique in general.
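
Reading the abstract literally, the functional being minimized is the surface's gravitational potential energy; a plausible sketch of its form, with exact constants and constraints deferred to the paper:

    % Gravitational potential energy of a surface S with constant areal
    % density \rho, gravity g, and height z above the zero-potential plane:
    E[S] \;=\; \rho\, g \int_{S} z \,\mathrm{d}A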

Neural Rendering and Reenactment of Human Actor Videos

We propose a method for generating (near) video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. Our approach thus significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. To train our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method on reenactment, where the motion data is obtained by tracking another person, and show video results generated from artist-designed skeleton motion. Our results outperform the state of the art in learning-based human image synthesis.
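
For reference, the standard conditional-GAN objective such a synthetic-to-real translation network typically optimizes (the paper's exact losses may differ or include additional terms, e.g. reconstruction or perceptual losses):

    % s: synthetic render of the template model; r: corresponding real frame.
    \min_G \max_D \;
      \mathbb{E}_{(s,r)}\!\left[\log D(s, r)\right]
      + \mathbb{E}_{s}\!\left[\log\left(1 - D(s, G(s))\right)\right]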

Colored Fused Filament Fabrication

Fused filament fabrication is the method of choice for printing 3D models at low cost. Unfortunately, filament printers cannot reproduce colored objects. The best current techniques rely on a form of dithering that exploits occlusion, which has only been demonstrated for shades of two base colors and behaves differently depending on surface slope. We explore a novel approach for 3D printing colored objects, capable of creating controlled gradients of varying sharpness. Our technique exploits off-the-shelf nozzles that are designed to mix multiple filaments in a small melting chamber, obtaining intermediate colors once the mix has stabilized. We exploit this property to produce color gradients. We divide each input layer into a set of sublayers, each having a different constant color. By locally changing the thickness of the sublayers, we change the color perceived at a given location. By optimizing the choice of colors for each sublayer, we further improve quality and allow the use of different numbers of input filaments. We demonstrate our results by building a functional color printer using low-cost, off-the-shelf components. Using our tool, a user can paint a 3D model and directly produce its physical counterpart, using any material and color available for fused filament fabrication.
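
A minimal sketch of the sublayer idea, under the simplifying assumption that perceived color is a convex combination of filament colors weighted by sublayer thickness (the paper additionally optimizes the sublayer colors themselves and models the mixing nozzle, which this sketch does not):

    import numpy as np

    def sublayer_thicknesses(target_rgb, filament_rgbs, layer_height=0.2):
        """Split one slicing layer into constant-color sublayers whose
        thicknesses approximate a target color, assuming (simplistically)
        linear mixing of filament colors weighted by thickness."""
        F = np.asarray(filament_rgbs, float).T  # 3 x n matrix of filament colors
        w, *_ = np.linalg.lstsq(F, np.asarray(target_rgb, float), rcond=None)
        w = np.clip(w, 0.0, None)               # thicknesses cannot be negative
        w = w / w.sum()                         # partition the layer height
        return w * layer_height                 # per-sublayer thickness (mm)

    # Toy usage: mix cyan, magenta, yellow, white filaments toward pale orange.
    filaments = [(0, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1)]
    print(sublayer_thicknesses((1.0, 0.6, 0.3), filaments))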

Microfacet BSDFs Generated from NDFs and Explicit Microgeometry

Microfacet distributions are nowadays considered among the most physically plausible reflectance representations. Many authors have focused on their physical and mathematical correctness while proposing means to enlarge the range of possible visual effects. This paper studies normal distribution functions (NDFs), and particularly the influence of their shape on the final appearance of rendered surfaces. We first propose a classification of existing NDFs based on a more general representation, called SGTD, that clearly identifies the desired properties of each of them. More importantly, we provide a complete framework for studying the impact of the NDF on the observed bidirectional scattering distribution function (BSDF). To explore very general NDFs, including anisotropic materials, we use a piecewise-continuous representation. It is derived together with its associated Smith GAF and an importance-sampling scheme, ensuring efficient global-illumination computations. We also propose a new procedure for generating an explicit geometric microsurface, used to evaluate the validity of analytic models and multiple-scattering effects. Rendering results produced with path tracing demonstrate that this generation procedure is suitable for any NDF model, regardless of its shape complexity.
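
For context, a small sketch of how a specular microfacet BSDF is assembled from an NDF and its Smith GAF, using the analytic GGX distribution as a stand-in for the paper's piecewise-continuous representation (Fresnel omitted for brevity):

    import numpy as np

    def ggx_ndf(cos_h, alpha):
        """GGX/Trowbridge-Reitz NDF D(h), with cos_h = n.h and roughness alpha."""
        c2 = cos_h * cos_h
        denom = c2 * (alpha * alpha - 1.0) + 1.0
        return (alpha * alpha) / (np.pi * denom * denom)

    def smith_g1(cos_v, alpha):
        """Smith masking term G1 matching the GGX NDF, with cos_v = n.v."""
        a2 = alpha * alpha
        return 2.0 * cos_v / (cos_v + np.sqrt(a2 + (1.0 - a2) * cos_v * cos_v))

    def microfacet_brdf(n, v, l, alpha):
        """Specular microfacet BRDF D*G / (4 (n.v)(n.l)) built from the NDF
        and its separable Smith GAF."""
        h = (v + l) / np.linalg.norm(v + l)     # half vector
        d = ggx_ndf(n @ h, alpha)
        g = smith_g1(n @ v, alpha) * smith_g1(n @ l, alpha)
        return d * g / (4.0 * (n @ v) * (n @ l))

    # Toy usage: symmetric 0.5-radian view/light directions over a flat normal.
    n = np.array([0.0, 0.0, 1.0])
    v = np.array([np.sin(0.5), 0.0, np.cos(0.5)])
    l = np.array([-np.sin(0.5), 0.0, np.cos(0.5)])
    print(microfacet_brdf(n, v, l, alpha=0.3))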

A Luminance-Aware Model of Judder Perception

The perceived discrepancy between continuous motion as seen in nature and frame-by-frame exhibition on a display, sometimes termed judder, is an integral part of video presentation. Over time, content creators have developed a set of rules and guidelines for maintaining a desirable cinematic look under the restrictions placed by display technology without incurring prohibitive judder. With the advent of novel displays capable of high brightness, contrast, and frame rates, these guidelines are no longer sufficient to present audiences with a uniform viewing experience. In this work, we analyze the main factors behind perceptual motion artifacts in digital presentation and gather psychophysical data to build a model of judder perception. Our model enables applications such as matching perceived motion artifacts to a traditionally desirable level and maintaining a cinematic motion look.
