PyTorch

Facebook developed PyTorch, releasing it in 2016. Although it is based on Torch, PyTorch is faster, leaner, and more sophisticated. Torch was not meeting Facebook's needs, so the team rewrote it in Python, adding features along the way. Whereas TensorFlow and Caffe are static in nature, requiring a model to be rebuilt for any change in behavior, PyTorch supports changing behavior without building a new model from scratch.

Project Background

  • Framework: PyTorch
  • Author: Facebook
  • Initial Release: September 2016
  • Type: Complete ML Framework
  • License: BSD
  • Contains: Wide array of modules
  • Language: Python and C++
  • GitHub: pytorch
  • Runs On: x86-64 and IA-32
  • Twitter: PyTorch

Summary

  • Developed in Python
  • Fast and lean package
  • Neural Networks (NN) can be written in Python
  • Strong support for GPU / CUDA; integrates with Intel MKL and NVIDIA cuDNN
  • Mature CPU and GPU backends, used for years by Facebook
  • Comes with Tensor computation (similar to NumPy)
  • Functionality can be extended with NumPy, SciPy, and Cython
  • Components: 1) torch.autograd, 2) torch (the Tensor library), 3) torch.nn, 4) torch.utils, 5) torch.multiprocessing, and 6) torch.jit
  • Often used as a replacement for NumPy, since NumPy does not support GPUs out of the box; also used as a deep learning research platform
  • Provides lots of Tensor routines to support slicing, math operations, indexing, reductions, and linear algebra
  • Building NNs: uses a tape-recorder-style technique, recording operations as they execute and replaying them in reverse to compute gradients
  • Different from Theano, TensorFlow, Caffe, and CNTK, which are static and require building a new NN when behavior changes; PyTorch allows an NN to change behavior using reverse-mode auto-differentiation
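A minimal sketch of the NumPy-like Tensor API mentioned above (slicing, reductions, math operations, and the NumPy bridge); the specific values here are illustrative, not from the source:

```python
import torch

# Create a 2x3 float tensor, much like numpy.arange(...).reshape(...)
t = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# NumPy-style slicing: take the second column
col = t[:, 1]            # values 1.0 and 4.0

# Reductions and elementwise math
total = t.sum().item()   # 0+1+2+3+4+5 = 15.0
scaled = t * 2           # elementwise multiply

# Zero-copy bridge to NumPy (CPU tensors share memory with the array)
arr = t.numpy()

# Move the tensor to the GPU when CUDA is present
if torch.cuda.is_available():
    t = t.to("cuda")
```

The `.numpy()` bridge is one direction of the NumPy/SciPy interoperability the summary refers to; `torch.from_numpy` goes the other way.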
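The dynamic, tape-based behavior contrasted with static frameworks above can be sketched as follows; the function and values are hypothetical examples, not from the source:

```python
import torch

# Autograd records operations on a "tape" as they run, then
# replays them in reverse to compute gradients.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x
y.backward()                 # reverse-mode auto-differentiation
grad = x.grad.item()         # dy/dx = 2x + 3 = 7.0 at x = 2

# Because the graph is rebuilt on every forward pass, ordinary
# Python control flow can change the network's behavior per input,
# with no need to construct a new model from scratch.
def f(x):
    if x.sum() > 0:          # data-dependent branch
        return x * 2
    return x - 1
```

In a static framework the branch in `f` would require rebuilding the graph; here it is just Python code evaluated on each run.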

 
