Applied Defenses Against Adversarial Attacks

Open Access
- Author:
- Payne, Collin Michael
- Area of Honors:
- Security and Risk Analysis
- Degree:
- Bachelor of Science
- Document Type:
- Thesis
- Thesis Supervisors:
- Edward J Glantz, Thesis Supervisor
- Edward J Glantz, Thesis Honors Advisor
- Alison Ryan Murphy, Faculty Reader
- Keywords:
- machine learning
- artificial intelligence
- adversarial attacks
- security and risk analysis
- machine learning security
- adversarial machine learning
- machine learning defense
- computer vision
- Abstract:
- “Do you really want to see what it looks like when two gods go to war?” This quote from the show Person of Interest captures how many see the future of adversarial machine learning. Machine learning has recently become popular among companies and institutions researching the various ways to apply machine learning models. These models can be beneficial, but they also raise many concerns. Machine learning is the use of statistics and algorithms to find patterns in data, and it underlies artificial intelligence applications such as search engines and the recommendation systems used by Amazon and Netflix. The evolution of machine learning, artificial intelligence, deep learning, and neural networks has created an almost “god-like” world where models run and learn without human supervision. The trend toward an altruistic “machine-god” might be considered acceptable by some if not for the introduction of “adversarial machine learning,” in which an attacker intentionally fools or misleads models with malicious input. The resolution to this scenario, and the purpose of this research, is the introduction of a “defensive” machine-god to offset and weaken the impact of this adversary. Thus, to answer the opening question, we do want to see two gods go to war. Otherwise, the adversary alone controls and dominates machine learning’s models.