Improving Trust in Deep Learning by Augmented Deep K-Nearest Neighbors

Open Access
- Author: Tickner, Sarah
- Area of Honors: Engineering Science
- Degree: Bachelor of Science
- Document Type: Thesis
- Thesis Supervisors:
  - Bruce Einfalt, Thesis Supervisor
  - Gary L Gray, Thesis Honors Advisor
- Keywords: Machine Learning, Deep Learning, Neural Networks, Trust, DKNN, KNN
- Abstract:
- Deep Learning is a black-box process: the input and the output are visible, but the inner workings of how an input is processed are difficult to trace. Although automated classification can increase the efficiency of many processes, the outputs of machine learning algorithms are sometimes incorrect, and the lack of visibility into the processing done by these algorithms makes it difficult to assure operators that the resulting classifications should be trusted. Many machine learning systems have a significant impact on important functions in society, so their results must be trustworthy, and Neural Networks need ways to provide stronger assurance about the correctness of their classifications so that these algorithms can be employed safely and without unintended negative side-effects. This thesis examines Machine Learning and Deep Learning systems and discusses ways that these systems can provide increased assurance that they are operating only as intended. It focuses in particular on Deep k-Nearest Neighbors (DkNN), a method that examines the representations produced at several layers of a deep neural network to gain more confidence that the output classification is correct. This method can be improved upon, and a plan for exploring several potential improvements is developed and presented in this thesis.
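
To make the DkNN idea described above concrete, the sketch below collects the hidden-layer representations of a small network's training data, fits a k-nearest-neighbors index for each layer, and reports how many of a test input's neighbors across the layers agree with the network's prediction; that agreement serves as a rough credibility score. This is a minimal illustration rather than the thesis's implementation: the toy network, synthetic data, layer choices, and value of k are all assumptions made for demonstration.

```python
# Minimal sketch of the Deep k-Nearest Neighbors (DkNN) idea: run k-NN over
# the representations at several hidden layers and use neighbor agreement as
# a confidence signal. Network, data, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from sklearn.neighbors import NearestNeighbors

torch.manual_seed(0)

# Toy data: two Gaussian blobs standing in for a real dataset.
n_per_class, dim = 200, 10
x_train = torch.cat([torch.randn(n_per_class, dim) + 2.0,
                     torch.randn(n_per_class, dim) - 2.0])
y_train = torch.cat([torch.zeros(n_per_class, dtype=torch.long),
                     torch.ones(n_per_class, dtype=torch.long)])

# Small feed-forward classifier whose hidden layers we will inspect.
class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(dim, 32)
        self.fc2 = nn.Linear(32, 16)
        self.out = nn.Linear(16, 2)

    def forward(self, x, return_hidden=False):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        logits = self.out(h2)
        return (logits, [h1, h2]) if return_hidden else logits

model = MLP()
optim = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):  # brief full-batch training loop
    optim.zero_grad()
    loss = nn.functional.cross_entropy(model(x_train), y_train)
    loss.backward()
    optim.step()

# Fit one k-NN index per hidden layer on the training representations.
k = 10
model.eval()
with torch.no_grad():
    _, train_hidden = model(x_train, return_hidden=True)
layer_indexes = [NearestNeighbors(n_neighbors=k).fit(h.numpy())
                 for h in train_hidden]

def dknn_credibility(x):
    """Fraction of neighbors, across all inspected layers, whose training
    label matches the network's prediction for x (higher = more support)."""
    with torch.no_grad():
        logits, hidden = model(x.unsqueeze(0), return_hidden=True)
    pred = logits.argmax(dim=1).item()
    agree, total = 0, 0
    for h, index in zip(hidden, layer_indexes):
        _, nbr_idx = index.kneighbors(h.numpy())
        nbr_labels = y_train[torch.from_numpy(nbr_idx[0])]
        agree += int((nbr_labels == pred).sum())
        total += k
    return pred, agree / total

pred, cred = dknn_credibility(x_train[0])
print(f"prediction={pred}, neighbor agreement={cred:.2f}")
```

A low agreement score flags inputs whose intermediate representations are unlike those of any training class, which is the kind of additional assurance signal the abstract describes.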