eScholarship
Open Access Publications from the University of California

UC San Diego Electronic Theses and Dissertations

Kernel methods for deep learning

Abstract

We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector machines for large margin classification, as well as in new models of unsupervised learning based on deep architectures. On several problems, we obtain better results than the previous leading benchmarks from both support vector machines with Gaussian kernels and deep belief nets. Finally, we examine the properties of these kernels by analyzing the geometry of the surfaces that they induce in Hilbert space.
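As an illustration of the ideas in the abstract, the sketch below implements one member of this kernel family, the degree-one arc-cosine kernel, which mimics a single hidden layer of rectified linear units, together with the layer-wise composition the abstract mentions. The closed form and the recursion are standard for this kernel family; treat the function names and the specific degree-one choice as illustrative assumptions, not as the full construction developed in the dissertation.

```python
import numpy as np

def arc_cosine_kernel(x, y):
    """Degree-one arc-cosine kernel (illustrative sketch).

    k(x, y) = (1/pi) * ||x|| * ||y|| * (sin t + (pi - t) * cos t),
    where t is the angle between x and y. This closed form mimics
    the inner product computed by an infinitely wide layer of
    rectified linear (threshold) units.
    """
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(np.dot(x, y) / (nx * ny), -1.0, 1.0)
    t = np.arccos(cos_t)
    return (1.0 / np.pi) * nx * ny * (np.sin(t) + (np.pi - t) * np.cos(t))

def deep_arc_cosine_kernel(x, y, depth=2):
    """Compose the degree-one kernel with itself `depth` times.

    Composition replaces the input angle with the angle induced in
    the feature space of the previous layer, mimicking a deeper
    architecture. For degree one, k(x, x) is a fixed point of the
    recursion, so only the cross term k(x, y) is updated.
    """
    kxy = arc_cosine_kernel(x, y)
    kxx = arc_cosine_kernel(x, x)
    kyy = arc_cosine_kernel(y, y)
    for _ in range(depth - 1):
        cos_t = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
        t = np.arccos(cos_t)
        kxy = (1.0 / np.pi) * np.sqrt(kxx * kyy) * (np.sin(t) + (np.pi - t) * np.cos(t))
    return kxy
```

For degree one the kernel preserves norms, so k(x, x) = ||x||^2, and two orthogonal inputs (t = pi/2) give k = ||x|| * ||y|| / pi; a Gram matrix built from this function can be passed to any kernel machine, e.g. a support vector machine with a precomputed kernel.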
