Neural Network Feature Extraction For Activity Recognition In Video Data

Open Access
Author:
Casterline, Kyle A
Area of Honors:
Electrical Engineering
Degree:
Bachelor of Science
Document Type:
Thesis
Thesis Supervisors:
  • Shashi Phoha, Thesis Supervisor
  • John Douglas Mitchell, Honors Advisor
Keywords:
  • features
  • PCA
  • sparsity
  • neural
  • network
  • video
  • extraction
  • spatio-temporal
  • object
  • recognition
  • classification
  • SVM
  • neighbors
  • VIRAT
Abstract:
The goal of this research is to determine how effectively objects within a video stream can be classified using autoencoding neural network methods that extract spatio-temporal features from the data. More specifically, we wish to validate a feature extraction method based on a Sparse Autoencoding Neural Network (SAENN). This method, which has shown success in static machine vision problems, has to the best of our knowledge never before been applied to video. The performance of autoencoder-derived features is compared against Principal Component Analysis (PCA) features as a relevant baseline. The SAENN model is evaluated on two factors: data reconstruction quality and object classification performance. Reconstruction performance is measured by mean squared error, and classification accuracy is obtained from experiments with supervised learning algorithms (support vector machines and k-nearest neighbors). The results of this research indicate that features extracted by the SAENN model outperform those extracted by PCA in both areas of comparison.
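
The evaluation protocol described above (extract low-dimensional features, score them by reconstruction MSE, then classify in feature space) can be illustrated with the PCA baseline side of the pipeline. The sketch below is not the thesis code: it uses synthetic data in place of VIRAT video patches, PCA via SVD in place of the SAENN, and a leave-one-out 1-nearest-neighbor classifier as a stand-in for the SVM/k-NN experiments; dimensions and class counts are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for flattened spatio-temporal patches:
# two classes of 64-dimensional vectors, separated by a mean shift.
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(0.8, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# PCA via SVD of the centered data matrix
mu = X.mean(axis=0)
Xc = X - mu
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 16
W = Vt[:k].T               # top-k principal directions (64 x 16)
Z = Xc @ W                 # extracted features (200 x 16)
X_hat = Z @ W.T + mu       # reconstruction from k components

# Factor 1: reconstruction performance as mean squared error
mse = np.mean((X - X_hat) ** 2)

# Factor 2: classification performance in feature space,
# here a leave-one-out 1-nearest-neighbor classifier
D = np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
np.fill_diagonal(D, np.inf)    # a point may not be its own neighbor
acc = (y[D.argmin(axis=1)] == y).mean()
```

An autoencoder-based comparison would replace the `W`/`Z` step with the encoder output of a trained sparse autoencoder and reuse the same MSE and accuracy measurements, which is what makes the two feature extractors directly comparable.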