Neural computing is an emerging research topic due to the rapidly growing demand for and applications of machine learning. In this virtual simulation study, a neural network model was trained using free software and its functionality was translated into hardware. In the context of analog neural networks, this research seeks to verify that a shifted sigmoid function can approximate the transfer function of a CMOS inverter. Demonstrating this approximation accurately, while reducing the number of components, would help in implementing neural-network-based integrated chips. A compromise is made in selecting the distance metric for the proposed function; this distance between the given CMOS transfer function and the shifted sigmoid function is minimized using gradient descent. The approximate CMOS inverter transfer function is then verified in three-layer perceptron networks. The network topology generates weights randomly to provide a diverse set of truth tables. We report two networks whose weights are initialized randomly and trained with a back-propagation algorithm, owing to the volatile nature of the network topology and the activation function. The results of this research conclude that the shifted sigmoid function approximates the CMOS inverter transfer function adequately for the purposes of these perceptron networks.
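The fitting step described in the abstract can be illustrated with a short sketch. The code below is a hedged illustration, not the authors' implementation: it assumes the inverter's voltage transfer characteristic (VTC) is available as sampled points (a synthetic steep falling curve stands in for a simulated sweep), parameterizes the shifted sigmoid as f(v) = VDD / (1 + exp(a(v - b))), and minimizes the mean squared distance to the VTC by gradient descent on the slope a and shift b. All names and numeric values are illustrative assumptions.

# Minimal sketch (assumptions, not the paper's code): fit a shifted sigmoid
# to sampled points of a CMOS inverter voltage transfer characteristic by
# gradient descent on the mean squared distance.
import numpy as np

VDD = 1.0
v = np.linspace(0.0, VDD, 101)                      # input voltage samples
vtc = VDD / (1.0 + np.exp(25.0 * (v - 0.5 * VDD)))  # stand-in for a simulated inverter VTC

def shifted_sigmoid(v, a, b):
    # Falling shifted sigmoid f(v) = VDD / (1 + exp(a * (v - b)))
    return VDD / (1.0 + np.exp(a * (v - b)))

a, b = 10.0, 0.4            # initial slope and shift
lr_a, lr_b = 200.0, 0.1     # per-parameter step sizes, hand-tuned for this example
for _ in range(20000):
    f = shifted_sigmoid(v, a, b)
    err = f - vtc                         # pointwise distance to the VTC
    s = f * (1.0 - f / VDD)               # common factor in both analytic derivatives
    grad_a = np.mean(2.0 * err * (-s * (v - b)))    # d(MSE)/da
    grad_b = np.mean(2.0 * err * ( s * a))          # d(MSE)/db
    a -= lr_a * grad_a
    b -= lr_b * grad_b

mse = np.mean((shifted_sigmoid(v, a, b) - vtc) ** 2)
print(f"slope a = {a:.2f}, shift b = {b:.3f}, mean squared distance = {mse:.2e}")

The fitted sigmoid could then serve as the activation function of the three-layer perceptron mentioned above; that training step is the standard back-propagation procedure and is not repeated here.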
Published in | Science Journal of Circuits, Systems and Signal Processing (Volume 9, Issue 1)
DOI | 10.11648/j.cssp.20200901.13
Page(s) | 24-30
Creative Commons | This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.
Copyright | Copyright © The Author(s), 2020. Published by Science Publishing Group
Keywords | Analog Components, Artificial Neural Network, Machine Learning, Universal Gates, Virtual Network
APA Style
Muhammad Sana Ullah, William Brickner, Emadelden Fouad. (2020). Use of Virtual Forward Propagation Network Model to Translate Analog Components. Science Journal of Circuits, Systems and Signal Processing, 9(1), 24-30. https://doi.org/10.11648/j.cssp.20200901.13
ACS Style
Muhammad Sana Ullah; William Brickner; Emadelden Fouad. Use of Virtual Forward Propagation Network Model to Translate Analog Components. Sci. J. Circuits Syst. Signal Process. 2020, 9(1), 24-30. doi: 10.11648/j.cssp.20200901.13
AMA Style
Muhammad Sana Ullah, William Brickner, Emadelden Fouad. Use of Virtual Forward Propagation Network Model to Translate Analog Components. Sci J Circuits Syst Signal Process. 2020;9(1):24-30. doi: 10.11648/j.cssp.20200901.13
@article{10.11648/j.cssp.20200901.13,
  author   = {Muhammad Sana Ullah and William Brickner and Emadelden Fouad},
  title    = {Use of Virtual Forward Propagation Network Model to Translate Analog Components},
  journal  = {Science Journal of Circuits, Systems and Signal Processing},
  volume   = {9},
  number   = {1},
  pages    = {24-30},
  year     = {2020},
  doi      = {10.11648/j.cssp.20200901.13},
  url      = {https://doi.org/10.11648/j.cssp.20200901.13},
  eprint   = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.cssp.20200901.13},
  abstract = {Neural computing is an emerging research topic due to the rapidly growing demand for and applications of machine learning. In this virtual simulation study, a neural network model was trained using free software and its functionality was translated into hardware. In the context of analog neural networks, this research seeks to verify that a shifted sigmoid function can approximate the transfer function of a CMOS inverter. Demonstrating this approximation accurately, while reducing the number of components, would help in implementing neural-network-based integrated chips. A compromise is made in selecting the distance metric for the proposed function; this distance between the given CMOS transfer function and the shifted sigmoid function is minimized using gradient descent. The approximate CMOS inverter transfer function is then verified in three-layer perceptron networks. The network topology generates weights randomly to provide a diverse set of truth tables. We report two networks whose weights are initialized randomly and trained with a back-propagation algorithm, owing to the volatile nature of the network topology and the activation function. The results of this research conclude that the shifted sigmoid function approximates the CMOS inverter transfer function adequately for the purposes of these perceptron networks.}
}
TY - JOUR
T1 - Use of Virtual Forward Propagation Network Model to Translate Analog Components
AU - Muhammad Sana Ullah
AU - William Brickner
AU - Emadelden Fouad
Y1 - 2020/07/17
PY - 2020
N1 - https://doi.org/10.11648/j.cssp.20200901.13
DO - 10.11648/j.cssp.20200901.13
T2 - Science Journal of Circuits, Systems and Signal Processing
JF - Science Journal of Circuits, Systems and Signal Processing
JO - Science Journal of Circuits, Systems and Signal Processing
SP - 24
EP - 30
PB - Science Publishing Group
SN - 2326-9073
UR - https://doi.org/10.11648/j.cssp.20200901.13
AB - Neural computing is an emerging research topic due to the rapidly growing demand for and applications of machine learning. In this virtual simulation study, a neural network model was trained using free software and its functionality was translated into hardware. In the context of analog neural networks, this research seeks to verify that a shifted sigmoid function can approximate the transfer function of a CMOS inverter. Demonstrating this approximation accurately, while reducing the number of components, would help in implementing neural-network-based integrated chips. A compromise is made in selecting the distance metric for the proposed function; this distance between the given CMOS transfer function and the shifted sigmoid function is minimized using gradient descent. The approximate CMOS inverter transfer function is then verified in three-layer perceptron networks. The network topology generates weights randomly to provide a diverse set of truth tables. We report two networks whose weights are initialized randomly and trained with a back-propagation algorithm, owing to the volatile nature of the network topology and the activation function. The results of this research conclude that the shifted sigmoid function approximates the CMOS inverter transfer function adequately for the purposes of these perceptron networks.
VL - 9
IS - 1
ER -