Variational Generative Models

Open Access
Liu, Shikun
Area of Honors:
Degree:
  • Bachelor of Science
Document Type:
Thesis Supervisors:
  • C. Lee Giles, Thesis Supervisor
  • Serge Tabachnikov, Honors Advisor
Keywords:
  • variational inference
  • generative model
  • 3D shape learning
One may ask: how can we produce examples that are similar to those in a database, but not exactly the same? This thesis introduces the fundamental concepts behind variational generative models, one of the first families of models to successfully generate unseen yet realistic samples. Variational networks use a reparameterization of the variational lower bound to build a differentiable approximate posterior inference procedure with continuous latent variables. Their weak assumptions and simple gradient-based optimization have contributed to the rapid popularity of such frameworks. The main contribution of this thesis is a novel variational architecture for 3D shape understanding. We wish to build a generative model that (i) applies to multiple types of data and can learn hierarchical features of examples, and (ii) quickly learns new concepts from small samples and adapts rapidly to new data. Our proposed model attacks both of these problems and achieves superior results. We design a hierarchical variational network to learn voxelized 3D shapes in a supervised manner. The model learns 3D shape style in a hierarchical latent representation, and we can generate realistic 3D objects by sampling its latent probabilistic manifold. We use the learned embedding for unsupervised 3D shape classification and achieve a state-of-the-art result.
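The reparameterization of the variational lower bound mentioned in the abstract can be sketched in a few lines. This is a minimal illustrative example, not the thesis's actual model: the function names (`reparameterize`, `kl_to_standard_normal`) and the toy encoder outputs are assumptions introduced here, and the sketch covers only the sampling step and the KL regularizer of the ELBO for a Gaussian posterior with a standard normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, I).

    The randomness is isolated in eps, so z is a deterministic,
    differentiable function of the variational parameters (mu, log_var);
    this is what makes gradient-based optimization of the lower bound
    possible."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ): the regularizer term of the ELBO.

    Closed form for diagonal Gaussians; nonnegative, and zero exactly
    when the approximate posterior equals the prior."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# Toy encoder output for one example with a 3-dimensional latent space
# (hypothetical values, for illustration only).
mu = np.array([0.5, -0.2, 0.0])
log_var = np.array([0.0, -1.0, 0.5])

z = reparameterize(mu, log_var)          # differentiable latent sample
kl = kl_to_standard_normal(mu, log_var)  # KL penalty in the lower bound
```

In a full variational network the reconstruction term of the lower bound would be added to `-kl` and the sum maximized by gradient ascent over the encoder and decoder parameters.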