In this work, we extend machine learning (ML) research by focusing on an alternative to the current driver of artificial intelligence, the artificial neural network (ANN), in the form of a neuromorphic, brain-inspired architecture: the spiking neural network (SNN). The ever-growing field of ML offers an impressive array of automated capabilities, such as image recognition, pattern analysis, and data classification; these capabilities, however, come with the cost of resource-intensive, computationally demanding training at scale. Research on current SNNs suggests that many of these operations can be performed in a manner analogous to the functioning of the cerebral cortex, ideally approaching the efficiency of human thought processing while reducing the latency of large-scale computations. This thesis aims to build on that trend of reducing latency in order to maximize network training efficiency.