ACM Transactions on Graphics (TOG)

Latest Articles

Blockwise Multi-Order Feature Regression for Real-Time Path-Tracing Reconstruction

Path tracing produces realistic results including global illumination using a unified simple...

Colored Fused Filament Fabrication

Fused filament fabrication is the method of choice for printing 3D models at low cost and is the de facto standard for hobbyists, makers, and schools. Unfortunately, filament printers cannot truly reproduce colored objects. The best current techniques rely on a form of dithering exploiting occlusion, which was only demonstrated for shades of two...

A Luminance-aware Model of Judder Perception

The perceived discrepancy between continuous motion as seen in nature and frame-by-frame exhibition on a display, sometimes termed judder, is an integral part of video presentation. Over time, content creators have developed a set of rules and guidelines for maintaining a desirable cinematic look under the restrictions placed by display technology...

Microfacet BSDFs Generated from NDFs and Explicit Microgeometry

Microfacet distributions are nowadays considered a reference for physically plausible BSDF representations. Many authors have focused on their...

Terrain Amplification with Implicit 3D Features

While three-dimensional landforms, such as arches and overhangs, occupy a relatively small proportion of most computer-generated landscapes, they are distinctive and dramatic and have an outsize visual impact. Unfortunately, the dominant heightfield representation of terrain precludes such features, and existing in-memory volumetric structures are...


Submission Requirements

ACM TOG now follows the new submission procedures of ACM. See the Author Guidelines for details.  

About TOG

ACM TOG is the foremost peer-reviewed journal in the area of computer graphics. 

Recent impact factor calculations from Thomson Reuters give ACM TOG an impact factor of 6.495 and an Eigenfactor score of 0.032, the top ranking among all ACM journals.

Forthcoming Articles
Non-Line-of-Sight Reconstruction using Efficient Transient Rendering

Being able to see beyond the direct line of sight is an intriguing prospect that could benefit a wide variety of important applications. Recent work has demonstrated that time-resolved measurements of indirect diffuse light contain valuable information for reconstructing shape and reflectance properties of objects located around a corner. In this paper, we introduce a novel reconstruction scheme that, by design, produces solutions that are consistent with state-of-the-art physically-based rendering. Our method combines an efficient forward model (a custom renderer for time-resolved three-bounce indirect light transport) with an optimization framework to reconstruct object geometry in an analysis-by-synthesis sense. We evaluate our algorithm on a variety of synthetic and experimental input data, and show that it gracefully handles uncooperative scenes with high levels of noise or non-diffuse material reflectance.
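The analysis-by-synthesis idea can be illustrated in miniature: a forward model maps scene parameters to a simulated measurement, and an optimizer adjusts the parameters until the synthetic measurement matches the observed one. The `render_transient` stand-in below is a hypothetical toy (a sum of Gaussian pulses over time bins), not the paper's physically-based three-bounce transient renderer, and the finite-difference optimizer is likewise only a sketch of the general scheme.

```python
import numpy as np

# Toy analysis-by-synthesis sketch. `render_transient` is a hypothetical
# stand-in forward model, not the paper's transient renderer.
def render_transient(geometry, n_bins=32):
    """Map scene parameters (pulse arrival times) to a transient histogram."""
    t = np.arange(n_bins)
    # One Gaussian pulse per scene parameter, summed into a single histogram.
    return np.exp(-0.5 * (t[None, :] - geometry[:, None]) ** 2).sum(axis=0)

def reconstruct(measurement, init, steps=200, lr=0.05, eps=1e-3):
    """Fit scene parameters by gradient descent on the re-rendering loss,
    with gradients estimated by forward finite differences."""
    g = init.astype(float).copy()
    for _ in range(steps):
        base = np.sum((render_transient(g) - measurement) ** 2)
        grad = np.empty_like(g)
        for i in range(g.size):
            gp = g.copy()
            gp[i] += eps
            grad[i] = (np.sum((render_transient(gp) - measurement) ** 2) - base) / eps
        g -= lr * grad
    return g
```

Starting from a perturbed guess, the loop recovers parameters that re-render to the observed transient; the paper replaces this toy with an efficient physically-based forward model and a full optimization framework.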

Seamless Parametrization with Arbitrary Cones for Arbitrary Genus

Seamless global parametrization of surfaces is a key operation in geometry processing, e.g. for high-quality quad mesh generation. A common approach is to prescribe the parametric domain structure, in particular the locations of singularities (cones), and solve a non-convex optimization problem minimizing a distortion measure, with local injectivity imposed through constraints or barrier terms. In both cases, an initial valid parametrization is essential to serve as a feasible starting point for obtaining an optimized solution. While convexified versions of the constraints eliminate this initialization requirement, they narrow the range of solutions, causing some problem instances that actually do have a solution to become infeasible. We demonstrate that for arbitrary given sets of topologically admissible parametric cones with prescribed curvature, a global seamless parametrization always exists (with the exception of one well-known case). Importantly, our proof is constructive and directly leads to a general algorithm for computing such parametrizations. Most distinctively, this algorithm is bootstrapped with a convex optimization problem (solving for a conformal map), in tandem with a simple linear equation system (determining a seamless modification of this map). This map can then serve as a valid starting point and be optimized with respect to specific distortion measures using injectivity-preserving methods.

Manipulating Attributes of Natural Scenes via Hallucination

In this study, we explore building a two-stage framework for enabling users to directly manipulate high-level attributes of a natural scene. The key to our approach is a deep generative network which can hallucinate images of a scene as if they were taken at a different season (e.g. during winter), weather condition (e.g. on a cloudy day) or time of the day (e.g. at sunset). Once the scene is hallucinated with the given attributes, the corresponding look is then transferred to the input image while keeping the semantic details intact, giving a photo-realistic manipulation result. As the proposed framework hallucinates what the scene will look like, it does not require any reference style image, as is commonly needed in most appearance or style transfer approaches. Moreover, it allows a given scene to be manipulated according to a diverse set of transient attributes within a single model, eliminating the need to train a separate network for each translation task. Our comprehensive set of qualitative and quantitative results demonstrates the effectiveness of our approach against the competing methods.

Gaze-Contingent Ocular Parallax Rendering for Virtual Reality

Immersive computer graphics systems strive to generate perceptually realistic user experiences. Current-generation virtual reality (VR) displays are successful in accurately rendering many perceptually important effects, including perspective, disparity, motion parallax, and other depth cues. In this paper we introduce ocular parallax rendering, a technology that accurately renders small amounts of gaze-contingent parallax capable of improving depth perception and realism in VR. Ocular parallax describes the small amounts of depth-dependent image shifts on the retina that are created as the eye rotates. The effect occurs because the centers of rotation and projection of the eye are not the same. We study the perceptual implications of ocular parallax rendering by designing and conducting a series of user experiments. Specifically, we estimate perceptual detection and discrimination thresholds for this effect and demonstrate that it is clearly visible in most VR applications. Additionally, we show that ocular parallax rendering provides an effective ordinal depth cue and improves the impression of realistic depth in VR.
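The geometric effect admits a back-of-the-envelope estimate: rotating the eye about a center that sits behind the projection center translates the projection center sideways, and a sideways viewpoint translation t shifts a point at depth z by roughly t/z radians. The offset value below is an illustrative assumption, not a number taken from the paper.

```python
import numpy as np

# Back-of-the-envelope ocular parallax estimate. OFFSET_M is an assumed,
# illustrative distance between the eye's center of rotation and its
# center of projection (a few millimeters in a real eye).
OFFSET_M = 0.006

def parallax_shift_rad(depth_m, gaze_angle_rad, offset_m=OFFSET_M):
    """Approximate angular image shift of a point at depth_m when the eye
    rotates by gaze_angle_rad: the rotation translates the projection
    center by about offset_m * sin(angle) perpendicular to the line of
    sight, which shifts a point at depth z by roughly that amount over z."""
    return offset_m * np.sin(gaze_angle_rad) / depth_m

# Relative (depth-dependent) parallax between a near and a far point
# for a 10-degree gaze rotation; this differential shift is the depth cue.
near = parallax_shift_rad(0.5, np.radians(10.0))
far = parallax_shift_rad(5.0, np.radians(10.0))
relative = near - far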

Deep Iterative Frame Interpolation for Full-frame Video Stabilization

Adaptive Incident Radiance Field Sampling and Reconstruction Using Deep Reinforcement Learning

Monte Carlo (MC) path tracing suffers from serious noise. Two common solutions are noise filtering, which generates smooth but biased results, and sampling with a carefully crafted probability distribution function (PDF), which produces unbiased results. Both solutions benefit from an efficient incident radiance field sampling and reconstruction algorithm. We propose new deep-learning-based approaches, Q- and R-networks, that adaptively sample and reconstruct incident radiance fields. We first propose a CNN-based image-direction reconstruction network (R-network) that simultaneously exploits coherence in both the incident radiance-field space and the image space. In addition, we use deep reinforcement learning (Q-network) that adaptively chooses the better action between increasing the resolution of the radiance field and doubling the sampling density on the radiance field, to maximize the reward in terms of reaching the ground-truth result. To verify the benefits of our approach, we first test our method on our main application, unbiased high-quality rendering that synthesizes a PDF to guide MC sampling, and show robust improvement over state-of-the-art methods. We also test our method on biased applications, including preview and irradiance caching, and observe that it achieves performance comparable to state-of-the-art methods in these settings.

Neural Rendering and Reenactment of Human Actor Videos

We propose a method for generating (near) video-realistic animations of real humans under user control. In contrast to conventional human character rendering, we do not require the availability of a production-quality photo-realistic 3D model of the human, but instead rely on a video sequence in conjunction with a (medium-quality) controllable 3D template model of the person. With that, our approach significantly reduces production cost compared to conventional rendering approaches based on production-quality 3D models, and can also be used to realistically edit existing videos. Technically, this is achieved by training a neural network that translates simple synthetic images of a human character into realistic imagery. For training our networks, we first track the 3D motion of the person in the video using the template model, and subsequently generate a synthetically rendered version of the video. These images are then used to train a conditional generative adversarial network that translates synthetic images of the 3D model into realistic imagery of the human. We evaluate our method for the reenactment of another person that is tracked in order to obtain the motion data, and show video results generated from artist-designed skeleton motion. Our results outperform the state-of-the-art in learning-based human image synthesis.

Algebraic Representations for Volumetric Frame Fields

Field-guided parametrization methods have proven effective for quad meshing of surfaces; these methods compute smooth cross fields to guide the meshing process and then integrate the fields to construct a discrete mesh. A key challenge in extending these methods to three dimensions, however, is representation of field values. Whereas cross fields can be represented by tangent vector fields that form a linear space, the 3D analog---an octahedral frame field---takes values in a nonlinear manifold. In this work, we describe the space of octahedral frames in the language of differential and algebraic geometry. With this understanding, we develop geometry-aware tools for optimization of octahedral fields, namely geodesic stepping and exact projection via semidefinite relaxation. Our algebraic approach not only provides an elegant and mathematically sound description of the space of octahedral frames but also suggests a generalization to frames whose three axes scale independently, better capturing the singular behavior we expect to see in volumetric frame fields. These new odeco frames, so called as they are represented by orthogonally decomposable tensors, also admit a semidefinite-program-based projection operator. Our description of the spaces of octahedral and odeco frames suggests computing frame fields via manifold-based optimization algorithms; we show that these algorithms efficiently produce high-quality fields while maintaining stability and smoothness.

