Systems and Algorithms for Real-Time Audio Signal Processing
UC San Diego Electronic Theses and Dissertations

Abstract

Real-time systems are the canonical class of applications in signal processing. They drive the development of algorithms that approach theoretical results within demanding practical constraints, and they provide opportunities for devising clever ways to take advantage of hardware capabilities. State-of-the-art contributions are presented on three topics in this field.

The first contribution is the hardware, firmware, and software co-design of a wearable hearing aid research system. The system is open source, easy to develop for, and much more powerful than traditional hearing aids. Its audio performance matches that of standard hearing aids, and it can run custom DSP algorithms in user mode with only 2.4 ms of latency. The system also includes local web-based user control and wearable electrophysiology.

The second contribution describes the use of GPU "Tensor Core" matrix-multiply hardware to accelerate discrete Fourier transforms whose sizes are prime or have large prime factors. This includes mapping these sizes onto the power-of-two matrix sizes native to the Tensor Cores and emulating higher-precision arithmetic with lower-precision floating-point numbers. For large batch sizes, and for certain transform sizes that are odd or an odd number times 2 or 4, this approach produced state-of-the-art Fourier transform throughput.

Finally, two papers on algorithm design for real-time acoustic modeling in an audio spatialization system are presented. Two perceptually relevant types of diffraction are simulated with ray-based models of sound propagation; existing methods have accuracy or performance limitations, especially in dynamic applications. A set of algorithms called Volumetric Diffraction and Transmission (VDaT) is introduced to approximate shadowed or near-shadowed diffraction by an occluding object. Similarly, Spatially Sampled Near-Reflective Diffraction (SSNRD) handles near-reflective diffraction involving the edges of reflecting objects. Both methods use ray tracing to spatially sample the scene, approximate ground-truth results to within 1-3 dB, and are fast enough for real-time applications. SSNRD also incorporates path generation algorithms, uses a small deep neural network (DNN) to compute the response of each acoustical path, and applies the GPU "RT core" real-time ray tracing hardware to spatial computing tasks beyond traditional ray tracing.
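
As context for the Tensor Core contribution: a length-N discrete Fourier transform is multiplication by the N-by-N DFT matrix, so a batch of transforms is a single matrix-matrix product, which is exactly the workload that matrix-multiply hardware accelerates. The NumPy sketch below illustrates only this general mapping; the size 251 and the batch layout are arbitrary choices for the example, and the dissertation's actual decomposition and GPU implementation are not reproduced here.

```python
import numpy as np

def dft_matrix(n: int) -> np.ndarray:
    """Dense DFT matrix F with F[j, k] = exp(-2*pi*1j*j*k / n)."""
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-2j * np.pi * j * k / n)

# A batch of complex signals of prime length 251, one signal per column.
rng = np.random.default_rng(0)
batch = rng.standard_normal((251, 1024)) + 1j * rng.standard_normal((251, 1024))

# The whole batch of DFTs is one matrix-matrix product -- the shape of
# work that matrix-multiply (e.g., Tensor Core) hardware accelerates.
spectra = dft_matrix(251) @ batch

# Check against a conventional FFT along the transform axis.
assert np.allclose(spectra, np.fft.fft(batch, axis=0))
```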
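
The abstract also mentions emulating higher-precision arithmetic with lower-precision floating point. One common approach (assumed here for illustration; the dissertation's exact scheme may differ) is to split each float32 operand into a float16 "high" part plus a float16 residual and accumulate several partial products in float32, mimicking fp16-input / fp32-accumulate matrix hardware. The CPU-side NumPy sketch below demonstrates the idea; real Tensor Core code would be written in CUDA.

```python
import numpy as np

def split_fp16(x: np.ndarray):
    """Split a float32 array into a float16 'high' part and a float16 residual."""
    hi = x.astype(np.float16)
    lo = (x - hi.astype(np.float32)).astype(np.float16)
    return hi, lo

def matmul_fp16_emulated(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Approximate a float32 matmul from float16-quantized operands.
    Three partial products are accumulated in float32; the small lo*lo
    term is dropped. (Upcasting before the matmul mimics fp16-input /
    fp32-accumulate hardware, which NumPy cannot express directly.)"""
    a_hi, a_lo = split_fp16(a)
    b_hi, b_lo = split_fp16(b)
    f32 = np.float32
    return (a_hi.astype(f32) @ b_hi.astype(f32)
            + a_hi.astype(f32) @ b_lo.astype(f32)
            + a_lo.astype(f32) @ b_hi.astype(f32))

rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64), dtype=np.float32)
b = rng.standard_normal((64, 64), dtype=np.float32)

naive = a.astype(np.float16) @ b.astype(np.float16)  # plain half precision
emulated = matmul_fp16_emulated(a, b)
exact = a @ b

print("fp16 max error:    ", np.max(np.abs(naive - exact)))
print("emulated max error:", np.max(np.abs(emulated - exact)))
```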
