Machine vision is a computationally expensive problem with an exceptionally large
number of real-world applications. With the rise of the Internet of Things and the presence of
wearables in day-to-day settings, there is an additional focus on power constraints and the
limitations of fixed hardware. In a vision pipeline, the accuracy of the object classification stage
will likely affect the usefulness of the pipeline as a whole. However, we find it difficult to
create a system that can recognize a large number of objects both quickly and
accurately, because the number of classifiers needed grows with the number of object classes. We
observe that real-world images, and the objects in them, tend to be coherent: they exhibit
relationships between objects and scenes that humans use intuitively. This high-level
context could be used to inform and improve object classification by allowing us to
make reasonable, probabilistic guesses about which objects are likely to appear, based on other
information that we have about the image. This filtering lowers the number of classifiers that need to
be run, which also addresses power and timing concerns. In this paper, we explore the
meaning of context, design a framework to store it in a form accessible to a computer, and then
evaluate the efficacy of context-based filtering.
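For concreteness, the sketch below illustrates the kind of context-based filtering described above: given a scene label and a prior over which objects tend to appear in that scene, only the classifiers for plausible objects are run. This is a minimal illustration, not the framework developed in this paper; the scene labels, the prior table CONTEXT_PRIORS, the threshold, and the classifier registry are all hypothetical placeholders.

    # Minimal, illustrative sketch of context-based classifier filtering.
    # All names below (scenes, priors, classifiers) are hypothetical.

    # Hypothetical priors: probability of each object class appearing in a scene.
    CONTEXT_PRIORS = {
        "kitchen": {"mug": 0.60, "toaster": 0.40, "surfboard": 0.01},
        "beach":   {"mug": 0.05, "toaster": 0.01, "surfboard": 0.70},
    }

    def select_classifiers(scene, classifiers, threshold=0.10):
        """Keep only classifiers for objects plausible in this scene,
        so fewer classifiers have to run on each image."""
        priors = CONTEXT_PRIORS.get(scene, {})
        return {obj: clf for obj, clf in classifiers.items()
                if priors.get(obj, 0.0) >= threshold}

    def classify_with_context(image, scene, classifiers):
        """Run only the context-filtered subset of classifiers."""
        selected = select_classifiers(scene, classifiers)
        return {obj: clf(image) for obj, clf in selected.items()}

    # Example with dummy classifiers, each returning a confidence score.
    dummy_classifiers = {
        "mug":       lambda img: 0.9,
        "toaster":   lambda img: 0.2,
        "surfboard": lambda img: 0.1,
    }
    print(classify_with_context(None, "kitchen", dummy_classifiers))
    # Runs only the "mug" and "toaster" classifiers for a kitchen scene.

Under this scheme, the cost per image scales with the number of contextually plausible objects rather than with the full vocabulary of recognizable objects, which is the source of the power and timing savings discussed above.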