Comparing Integrate And Fire Neuron Circuits Using TFET And CMOS Technologies
Open Access
- Author:
- Caccese, Ronald Joseph
- Area of Honors:
- Electrical Engineering
- Degree:
- Bachelor of Science
- Document Type:
- Thesis
- Thesis Supervisors:
- Vijaykrishnan Narayanan, Thesis Supervisor
- Jeffrey Scott Mayer, Thesis Honors Advisor
- Keywords:
- CMOS
- TFET
- Neuromorphic Architecture
- Abstract:
- One of the most important challenges facing modern VLSI design is reducing power consumption. Modern integrated circuits use transistors with ever-shrinking feature sizes, which in turn increase overall leakage current. Alongside power reduction, another active area of research concerns emerging computational architectures that depart from classical styles of computing. One such architecture, attracting growing interest from device, circuit, algorithm, and architecture researchers, is “brain-inspired computing” using artificial neural networks (ANNs), which has shown great potential advantages over conventional computational architectures for machine-learning applications such as image and voice processing. To carry out tasks efficiently, ANNs are built from massively parallel arrays of neurons and synapses, so reducing the power of each neuron and synapse is of great significance for lowering overall system power and enabling more computation within the same power budget. This thesis combines these two areas of research by comparing a neuron circuit, used to build an artificial neural network, implemented in Tunnel-FET (TFET) and silicon CMOS technologies. The designs are created in Cadence Virtuoso, which enables comparison of design variables as well as calculation of the overall average power consumed by the neuron. Simulation results show that a TFET-based neuron is capable of operating with 2.3X less energy per spike. Reducing the power consumption of the elements of a neural network could vastly expand research into neuromorphic computing architectures: because a neuromorphic architecture comprises massively parallel arrays of neurons and synapses, a decrease in per-element power increases the number of elements that can run within a reasonable power budget, drastically increasing performance.
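For readers unfamiliar with the neuron model named in the title, the following is a minimal software sketch of a leaky integrate-and-fire neuron: the membrane potential integrates input current, leaks toward rest, and emits a spike and resets when it crosses a threshold. This is purely illustrative; the thesis implements the neuron as a transistor-level circuit in Cadence Virtuoso, and all parameter names and values below are hypothetical.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron model.
# All parameters are hypothetical, not taken from the thesis circuit.

def simulate_lif(input_current, v_thresh=1.0, v_reset=0.0, leak=0.05, dt=1.0):
    """Integrate an input-current sequence; return a 0/1 spike train.

    The membrane potential v integrates the input and leaks toward
    rest; when v crosses v_thresh the neuron fires and resets.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v += dt * (i - leak * v)   # integrate input, leak toward rest
        if v >= v_thresh:
            spikes.append(1)       # fire a spike...
            v = v_reset            # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant input drives periodic spiking: 2 spikes over 10 steps here.
spike_train = simulate_lif([0.3] * 10)
print(spike_train)  # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

In a hardware neuron, the energy consumed per such spike event is the figure of merit the thesis compares between the TFET and CMOS designs.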