Deep learning has a growing record of successes, but heavy algorithms running on large graphics processing units are far from ideal, especially for deployment on small devices. A relatively new family of deep learning methods, quantized neural networks, has emerged to address this gap. At Leapmind R&D, we are working on quantization methods, among other techniques, to enable efficient, high-performance deep learning computation on small devices.
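To give a flavor of the idea, here is a minimal sketch of uniform post-training quantization: floating-point weights are mapped to low-bit signed integers plus a single scale factor, so storage and arithmetic can use small integer types. This is a generic illustration of the concept, not LeapMind's specific method; the function names and the symmetric 8-bit scheme are assumptions for the example.

```python
import numpy as np

def quantize(w, num_bits=8):
    """Symmetric uniform quantization of a float array to signed integers.

    Returns integer codes and the scale needed to reconstruct approximate
    float values. Uses int8 storage, so num_bits must be <= 8 here.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8 bits
    max_abs = float(np.max(np.abs(w)))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from integer codes and scale."""
    return q.astype(np.float32) * scale

# Example: a tiny weight vector round-trips with bounded error.
w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, s = quantize(w)
w_hat = dequantize(q, s)
```

Each reconstructed weight differs from the original by at most half a quantization step (`s / 2`), which is the trade-off quantized networks exploit: a small, controlled accuracy loss in exchange for much cheaper storage and compute.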
Joel’s full article is available on Medium; please have a look: https://medium.com/@joel_34050/quantization-in-deep-learning-478417eab72b