Torch
Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. Its core features include a powerful N-dimensional array (the Tensor), linear algebra routines, a LuaJIT interface to C, routines for indexing and slicing, and embeddable backends for Android and iOS. The Tensor supports mathematical operations such as max, min, and sum, as well as BLAS operations such as the dot product, matrix-matrix multiplication, and matrix-vector multiplication.
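The Tensor operations listed above can be sketched in a few lines of Lua; this assumes the `torch` package is installed (for example, via the Torch distro):

```lua
require 'torch'

-- Two random matrices (values are arbitrary; only shapes matter here)
local A = torch.rand(2, 3)
local B = torch.rand(3, 2)

print(A:max())        -- largest element of A
print(A:min())        -- smallest element of A
print(A:sum())        -- sum of all elements
print(torch.mm(A, B)) -- matrix-matrix multiplication (a 2x2 result)

local v = torch.rand(3)
print(torch.mv(A, v)) -- matrix-vector multiplication (a size-2 vector)
print(v:dot(v))       -- dot product (a single number)
```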
Torch provides great flexibility and speed for building scientific algorithms while keeping the process simple. The framework is backed by popular neural network and optimization libraries that make it straightforward to implement complex neural network topologies. Many established names use Torch, including Google, Facebook, Twitter, NYU, and Purdue, among other companies and research labs.
Project Background
- Framework/Library: Torch
- Authors: Ronan Collobert, Samy Bengio, Johnny Mariéthoz
- Initial Release: October 2002
- Type: Library for machine learning and deep learning
- License: BSD License
- Contains: torch, nn, signal, randomkit, cephes, dataload, MIDI, LuaXML, csvigo, graphicsmagick, npy4th, torchx, dpnn, unsup, manifold, nnx, iTorch, fex, to name a few.
- Language: Lua, LuaJIT, C, CUDA and C++
- GitHub: Torch7
- Runs On: Linux, Android, Mac OS X, iOS
- GitHub Discussions: None
- Twitter: Torch
- Stack Overflow: Torch
- Samples: Tutorials and examples
Applications
- Machine learning
- Computer vision
- Signal processing
- Parallel processing
- Image, video, and audio processing
- Networking, among others
- Building arbitrary graphs of neural networks
- Parallelizing graphs efficiently over CPUs and GPUs
Summary
- Scientific computing framework
- Based on Lua and runs on LuaJIT
- Strong support for CPU and CUDA (GPU)
- Large community support and many 3rd party packages
- Packages: computer vision, signal processing, machine learning, parallel processing, image, audio, video, and networking
- Its Tensor library is efficient, comparable to NumPy's ndarray
- Supports acyclic computation graphs
- Loss functions help models learn by grading performance: after a forward pass, the loss computes a single value that measures how far the model's output is from the target, and this feedback guides training
- Supports N-dimensional array
- Supports routines for indexing, slicing, and transposing
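As a brief sketch of the indexing, slicing, and transposing routines mentioned above (again assuming the `torch` package is available):

```lua
require 'torch'

-- A 3x4 tensor holding the values 1..12, row by row
local t = torch.range(1, 12):resize(3, 4)

print(t[2])            -- indexing: the second row {5, 6, 7, 8}
print(t[{2, 3}])       -- indexing: the element at row 2, column 3 (7)
print(t[{{1, 2}, {}}]) -- slicing: the first two rows, all columns
print(t:t())           -- transposing: a 4x3 view, no data copied
```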
In the illustration below, a neural network (NN) in Torch consists of modules (bricks) that can be combined to build more complex networks. This particular NN is an image classifier: the first layer is the input layer, and each layer feeds the next until the output layer is reached.
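A minimal sketch of assembling such a classifier from `nn` modules; the layer sizes here are illustrative assumptions, not taken from the illustration:

```lua
require 'nn'  -- the neural network package; also loads 'torch'

-- Stack modules (bricks) into a small convolutional classifier
local net = nn.Sequential()
net:add(nn.SpatialConvolution(3, 6, 5, 5)) -- 3 input channels -> 6 feature maps
net:add(nn.ReLU())                         -- nonlinearity
net:add(nn.SpatialMaxPooling(2, 2, 2, 2))  -- downsample by 2
net:add(nn.View(6 * 14 * 14))              -- flatten the feature maps
net:add(nn.Linear(6 * 14 * 14, 10))        -- map to 10 classes
net:add(nn.LogSoftMax())                   -- log-probabilities per class

-- Forward a random 3x32x32 "image" through the stack
local output = net:forward(torch.rand(3, 32, 32))
print(output:size())  -- 10 class scores
```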

When dealing with images, videos, text, and audio, commands such as image.load and audio.load can be used in Torch. The five steps to creating a simple NN are listed below.
- Load data
- Define NN parameters
- Define loss function
- Train
- Test
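The five steps can be sketched as follows; the random stand-in data, layer sizes, and learning rate are assumptions for illustration, and a real script would load an actual dataset in step 1:

```lua
require 'nn'

-- 1. Load data (random stand-ins here instead of a real dataset)
local inputs  = torch.rand(100, 3, 32, 32)       -- 100 fake 3x32x32 images
local targets = torch.Tensor(100):random(1, 10)  -- labels in 1..10

-- 2. Define the NN: a single linear layer over flattened images
local net = nn.Sequential()
net:add(nn.View(3 * 32 * 32))
net:add(nn.Linear(3 * 32 * 32, 10))
net:add(nn.LogSoftMax())

-- 3. Define the loss function: negative log-likelihood for classification
local criterion = nn.ClassNLLCriterion()

-- 4. Train: one manual gradient step (loop over the data in practice)
local loss = criterion:forward(net:forward(inputs[1]), targets[1])
net:zeroGradParameters()
net:backward(inputs[1], criterion:backward(net.output, targets[1]))
net:updateParameters(0.001)  -- learning rate chosen arbitrarily

-- 5. Test: forward an example and take the most likely class
local _, predicted = net:forward(inputs[1]):max(1)
print(predicted[1])
```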
For the illustration below, the CIFAR-10 dataset is used. The dataset contains 60,000 color images in 10 classes, with 6,000 images per class.
