Curriculum Vitae | Google Scholar | LinkedIn
My research aims to make games and Augmented/Virtual Reality more believable by drawing on ideas from computational physics, psychophysics, graphics, and acoustics. I like working on research problems whose results find their way into functioning real-world systems. Over the past decade I have advanced physical modeling techniques for real-time sound synthesis and propagation; these now ship in interactive spatial audio systems and games experienced by millions of people worldwide.
Nikunj leads research projects at Microsoft Research's Redmond lab in the areas of spatial audio, computational acoustics, and computer graphics for games and AR/VR. As a senior researcher, he is responsible for conceiving new research directions, working through the technical problems with collaborators, mentoring interns, publishing findings, giving talks, and engaging closely with engineering groups to translate the ideas into real-world impact. Over the last decade he has led Project Triton, a first-of-its-kind wave acoustics system that is now in production use in multiple major Microsoft products, including Gears of War and Windows 10. Triton is currently being opened for external use as part of Project Acoustics. Nikunj has published and given talks at top academic and industrial venues across disciplines, including ACM SIGGRAPH, the Audio Engineering Society, the Acoustical Society of America, and the Game Developers Conference, and has served on the SIGGRAPH papers committee. Before Microsoft, he completed his PhD at UNC Chapel Hill, where his work helped initiate sound as a new research direction; his entire thesis code was licensed by Microsoft.
Shipped products
Triton is the first demonstration that high-quality wave acoustics can be made feasible for production games and Augmented/Virtual Reality. It accurately models sound wave propagation effects, including diffraction, on full 3D game maps for moving sources and listeners. A key novel aspect is giving the sound designer intuitive controls to dynamically tweak the physical acoustics. The result is believable environmental effects that transition smoothly as sounds and the player move through the world, such as scene-dependent reverberation, smooth occlusion and obstruction, and portaling effects.
Chakravarty R. Alla Chaitanya, John M. Snyder, Keith Godin, Derek Nowrouzezahrai, Nikunj Raghuvanshi
IEEE Transactions on Visualization and Computer Graphics (IEEE VR 2019), 25(4), April 2019 (to appear)
Precomputed techniques for sound and light must choose key "probe" locations where the propagating field is sampled, then carefully interpolate the result during gameplay. Most prior work employs uniform sampling for this purpose, but narrow spaces such as corridors pose a problem: one must either sample finely and waste many samples in wide-open regions, or risk missing an entire corridor, which causes audible issues at runtime. We present an adaptive sampling approach that resolves this tension by smoothly varying probe density based on a novel "local diameter" measure of the space surrounding any given point. To cope with the resulting unstructured sampling, we also propose a novel interpolator based on radial weights over geodesic shortest paths; it achieves smooth acoustic effects that respect scene boundaries, unlike existing visibility-based methods that can cause audible discontinuities during motion.
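To give a feel for the interpolation step, here is a minimal Python sketch of inverse-distance radial weighting over geodesic distances. It assumes the geodesic distances from the listener to each probe are already available (e.g., from a shortest-path query over the scene), and all names and constants are illustrative rather than the paper's actual formulation.

```python
import numpy as np

def interpolate_probes(probe_values, geodesic_dists, eps=1e-3, power=2.0):
    """Blend per-probe acoustic parameters with inverse-distance radial weights
    computed over geodesic (in-scene) distances rather than straight-line ones,
    so probes reachable only via long detours get negligible influence."""
    d = np.asarray(geodesic_dists, dtype=float)
    w = 1.0 / (d + eps) ** power          # radial weight falls off with path length
    w /= w.sum()                          # normalize to a partition of unity
    return np.dot(w, np.asarray(probe_values, dtype=float))

# Example: three probes with loudness values (dB); the probe across a wall has a
# long geodesic path even though it is spatially close, so it barely contributes.
loudness = interpolate_probes(probe_values=[-3.0, -6.0, -20.0],
                              geodesic_dists=[1.0, 2.5, 30.0])
print(round(loudness, 2))
```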
Zechen Zhang, Nikunj Raghuvanshi, John Snyder, Steve Marschner
ACM Transactions on Graphics (SIGGRAPH Asia 2018), 37(6), November 2018
Large extended sources, like waves on a beach or rain, are commonly used in games and VR to provide an active, immersive ambience. Propagating such sounds through 3D scenes is a daunting computational challenge, so it is common to fall back on point-source hacks that can sound unconvincing. We introduce a novel incoherent source formulation for efficient wave-based precomputation on 3D scene geometry. Simulated directional loudness information is compressed using spherical harmonics and rendered with a simple, efficient algorithm. Overall, the system is light on RAM (~1 MB per source) and CPU, enabling immersive spatial audio rendering of ambient sources for today's games and VR.
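The following is a small, self-contained Python illustration of the general idea of compressing directional loudness with low-order spherical harmonics. The order-1 basis, least-squares projection, and toy loudness field are simplified stand-ins, not the paper's exact encoding.

```python
import numpy as np

def sh_basis_order1(dirs):
    """Real spherical harmonics up to order 1, evaluated at unit direction vectors."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0, c1 = 0.2820948, 0.4886025
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

# Toy directional loudness field: louder toward +x (say, the open side of a beach).
rng = np.random.default_rng(0)
dirs = rng.normal(size=(256, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
loudness = 1.0 + 0.8 * dirs[:, 0]

B = sh_basis_order1(dirs)
coeffs, *_ = np.linalg.lstsq(B, loudness, rcond=None)   # 4 floats instead of 256 samples
reconstructed = B @ coeffs
print(coeffs.round(3), float(np.abs(reconstructed - loudness).max()))
```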
Nikunj Raghuvanshi and John Snyder, ACM Transactions on Graphics (SIGGRAPH), 37(4), August 2018
Prior wave-based approaches to precomputed ("baked") sound propagation didn't model directional acoustic effects arising from geometry, instead using simplifications like initial sound arriving at the listener along the line-of-sight direction, potentially through walls, and reverberation arriving from all directions equally. These approximations work reasonably within a single room, but today's game scenes have vast maps with many rooms and partially or fully outdoor areas. This paper presents the first system to efficiently precompute and render immersive spatial wave-acoustic effects, like diffracted sound arriving around doorways, directional reflections, and anisotropic reverberation, within RAM and CPU footprints immediately practical for games and VR.
Andrew Allen and Nikunj Raghuvanshi, ACM Transactions on Graphics (SIGGRAPH), 34(4), July 2015
This paper describes the first real-time technique to synthesize full-audible-bandwidth sounds for 2D virtual wind instruments. The user is presented with a sandbox interface where they can draw any bore shape and create tone holes, valves, or mutes. The system stays live throughout, synthesizing sound from the drawn geometry as governed by wave physics. Our main contribution is an interactive wave solver that executes entirely on modern graphics cards, with a novel numerical formulation supporting glitch-free online geometry modification.
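For intuition, here is a toy CPU sketch of the kind of time-stepped 2D wave simulation involved. The actual system uses a GPU solver with a considerably more careful numerical formulation and boundary handling; the grid size, wall treatment, and excitation here are assumptions for illustration only.

```python
import numpy as np

def step_wave_2d(p_prev, p_curr, walls, c_dt_dx=0.5):
    """One leapfrog time step of the 2D wave equation on a uniform grid.
    'walls' is a boolean mask; pressure is pinned to zero inside walls, a crude
    stand-in for proper boundary conditions."""
    lap = (np.roll(p_curr, 1, 0) + np.roll(p_curr, -1, 0) +
           np.roll(p_curr, 1, 1) + np.roll(p_curr, -1, 1) - 4.0 * p_curr)
    p_next = 2.0 * p_curr - p_prev + (c_dt_dx ** 2) * lap
    p_next[walls] = 0.0
    return p_next

# Tiny "instrument": a closed rectangular domain with an impulse near one end.
n = 128
p_prev, p_curr = np.zeros((n, n)), np.zeros((n, n))
p_curr[n // 2, 4] = 1.0                      # impulsive excitation near the mouthpiece
walls = np.zeros((n, n), dtype=bool)
walls[:, 0] = walls[:, -1] = walls[0, :] = walls[-1, :] = True
for _ in range(200):
    p_prev, p_curr = p_curr, step_wave_2d(p_prev, p_curr, walls)
print(float(np.abs(p_curr).max()))
```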
Nikunj Raghuvanshi and John Snyder, ACM Transactions on Graphics (SIGGRAPH), 33(4), July 2014
This paper presents a precomputed wave propagation technique that is immediately practical for games and VR. We demonstrate convincing spatially-varying effects in complex scenes, including occlusion, obstruction, and reverberation, while simultaneously reducing memory and signal-processing cost by orders of magnitude compared to prior wave-based approaches. The key observation is that while raw acoustic fields are quite chaotic in complex scenes and depend sensitively on source and listener location, perceptual parameters derived from those fields, such as loudness or reverberation time, are far smoother and thus amenable to efficient representation (see the sketch after this entry).
This paper provides the framework underlying Triton.
Shipped in Crackdown 2 (with Guy Whitmore and Kristofor Mellroth at Microsoft Game Studios)
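The sketch referenced above extracts an overall loudness and a T60-style decay time from a simulated impulse response using Schroeder backward integration. This is a generic textbook recipe standing in for the paper's actual parameter extraction; the fit range and floor are illustrative assumptions.

```python
import numpy as np

def perceptual_params(impulse_response, fs=48000):
    """Extract two smooth perceptual parameters from a simulated impulse response:
    overall loudness in dB, and a T60-style decay time from the Schroeder curve."""
    e = np.asarray(impulse_response, dtype=float) ** 2
    loudness_db = 10.0 * np.log10(e.sum() + 1e-12)
    # Schroeder backward integration gives a monotone energy-decay curve in dB.
    edc = np.cumsum(e[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-12)
    # Fit a line to the -5..-35 dB portion and extrapolate to -60 dB.
    t = np.arange(len(e)) / fs
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
    rt60 = -60.0 / slope
    return loudness_db, rt60

# Toy response: exponentially decaying noise reaching -60 dB at about 0.5 s.
fs = 48000
t = np.arange(int(fs * 1.0)) / fs
ir = np.random.default_rng(1).normal(size=t.size) * 10 ** (-60.0 * t / 0.5 / 20.0)
print([round(v, 2) for v in perceptual_params(ir, fs)])
```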
Brandon Lloyd, Nikunj Raghuvanshi, Naga K. Govindaraju, ACM Symposium on Interactive 3D Graphics and Games (I3D), 2011
Video games typically store recordings of many variations of a sound event to avoid repetitiveness, such as multiple footstep sounds for a walking animation. We present a technique that can produce unlimited variations of an impact sound while usually costing about as much memory as a single clip. The main idea is an analysis-synthesis approach: a single audio clip is used to extract the object's resonant mode frequencies and their decay rates, along with a fixed time-domain residual signal. The modal model is then amenable to on-the-fly variation and re-synthesis.
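A minimal Python sketch of the re-synthesis side of such a modal model is shown below: a sum of exponentially decaying sinusoids plus a fixed residual, with small per-playback jitter. The mode values, jitter scheme, and function names are illustrative assumptions, not the shipped implementation.

```python
import numpy as np

def resynthesize(mode_freqs, mode_decays, mode_amps, residual, fs=48000, dur=0.5,
                 pitch_jitter=0.02, rng=None):
    """Re-synthesize an impact from a compact modal model (frequencies, decay rates,
    amplitudes) plus a short residual clip, jittering modes slightly so every
    playback sounds like a fresh variation."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(fs * dur)) / fs
    out = np.zeros_like(t)
    for f, d, a in zip(mode_freqs, mode_decays, mode_amps):
        f = f * (1.0 + rng.uniform(-pitch_jitter, pitch_jitter))   # per-hit variation
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    out[:residual.size] += residual                                # fixed attack transient
    return out

# A plausible "metal bucket" model: three modes and a tiny noise burst as residual.
residual = np.random.default_rng(2).normal(scale=0.1, size=256) * np.hanning(256)
clip = resynthesize([523.0, 1290.0, 2740.0], [8.0, 15.0, 30.0], [1.0, 0.5, 0.25], residual)
print(clip.shape)
```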
Nikunj Raghuvanshi, John Snyder, Ravish Mehra, Ming C. Lin, and Naga K. Govindaraju, ACM Transactions on Graphics (SIGGRAPH), 29(4), July 2010
This paper presents the first technique for precomputed wave propagation on complex, 3D game scenes. It utilizes the ARD wave solver to compute acoustic responses for a large set of potential source and listener locations in the scene. Each response is represented with the time and amplitude of arrival of multiple wavefronts, along with a residual frequency trend. Our system demonstrates realistic, wave-based acoustic effects in real time, including diffraction low-passing behind obstructions, sound focusing, hollow reverberation in empty rooms, sound diffusion in fully-furnished rooms, and realistic late reverberation.
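The sketch below illustrates, in simplified form, how a response stored as a few wavefront arrivals (delay plus amplitude) can be applied to a dry source signal. The optional decaying-noise tail merely stands in for the residual frequency trend in the actual representation, and all names and numbers are assumptions for illustration.

```python
import numpy as np

def render_arrivals(dry, arrival_times, arrival_gains, fs=48000, tail=None):
    """Apply a compact propagated response, stored as a handful of wavefront
    arrivals (delay + amplitude), to a dry source signal. A decaying-noise 'tail'
    can stand in for the residual reverberation a full system would add."""
    ir_len = int(fs * (max(arrival_times) + 0.01))
    ir = np.zeros(ir_len)
    for t_a, g in zip(arrival_times, arrival_gains):
        ir[int(t_a * fs)] += g
    if tail is not None:
        ir[:tail.size] += tail
    return np.convolve(dry, ir)

dry = np.random.default_rng(3).normal(size=4800)         # 0.1 s of "dry" source audio
wet = render_arrivals(dry, arrival_times=[0.012, 0.019, 0.031],
                      arrival_gains=[0.8, 0.35, 0.2])
print(wet.shape)
```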
Nikunj Raghuvanshi, Rahul Narain and Ming C. Lin, IEEE Transactions on Visualization and Computer Graphics (TVCG), 15(5), 2009
We present a technique that relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the wave equation in rectangular domains, allowing the field within each rectangular spatial partition to be time-stepped without incurring numerical error. The partitions communicate through finite-difference-like linear operators, which do introduce numerical error in the form of weak artificial reflections. The use of analytical solutions lets the technique maintain reasonable accuracy at coarse spatial sampling rates close to the Nyquist limit.
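A minimal sketch of the per-partition modal update is given below, assuming rigid (Neumann) boundaries and omitting the interface operators that couple partitions; the cosine-basis transform is provided by scipy's DCT, and all sizes and constants are illustrative.

```python
import numpy as np
from scipy.fft import dctn, idctn

def ard_step(p_prev, p_curr, dx, dt, c=340.0):
    """Advance the pressure field in one rectangular partition by one time step
    using the exact modal (cosine-basis) solution of the wave equation, so no
    numerical dispersion accumulates inside the partition."""
    ny, nx = p_curr.shape
    kx = np.pi * np.arange(nx) / (nx * dx)
    ky = np.pi * np.arange(ny) / (ny * dx)
    omega = c * np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)    # mode frequencies
    M_prev, M_curr = dctn(p_prev, type=2), dctn(p_curr, type=2)
    M_next = 2.0 * M_curr * np.cos(omega * dt) - M_prev         # exact per-mode update
    return idctn(M_next, type=2)

# Toy partition: a Gaussian pulse evolving in a 1 m x 1 m rigid box.
n, dx, dt = 64, 1.0 / 64, 1e-4
xs = np.linspace(0, 1, n)
p0 = np.exp(-((xs[None, :] - 0.5) ** 2 + (xs[:, None] - 0.5) ** 2) / 0.005)
p_prev, p_curr = p0.copy(), p0.copy()
for _ in range(100):
    p_prev, p_curr = p_curr, ard_step(p_prev, p_curr, dx, dt)
print(float(np.abs(p_curr).max()))
```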
Nikunj Raghuvanshi and Ming C. Lin, ACM Symposium on Interactive 3D Graphics and Games (I3D), 2006
We present various perceptually-based optimizations for modal sound synthesis that allow scalable synthesis for virtual scenes with hundreds of sounding objects.
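As one hedged illustration of the flavor of such optimizations, the sketch below culls modes whose envelope has decayed below an audibility floor, shrinking the per-sample synthesis cost as an impact rings out. The threshold and names are assumptions for illustration, not the paper's exact criteria.

```python
import numpy as np

def cull_quiet_modes(freqs, amps, decays, t, floor_db=-60.0):
    """Drop modes whose current envelope has decayed below an audibility floor,
    so only perceptually relevant modes keep being synthesized."""
    level_db = 20.0 * np.log10(np.asarray(amps) * np.exp(-np.asarray(decays) * t) + 1e-12)
    keep = level_db > floor_db
    return np.asarray(freqs)[keep], np.asarray(amps)[keep], np.asarray(decays)[keep]

freqs, amps, decays = cull_quiet_modes([440.0, 1200.0, 5200.0],
                                       [1.0, 0.2, 0.01],
                                       [5.0, 20.0, 40.0], t=0.2)
print(freqs)
```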