Reading time: 15 minutes
In this article, we explore the key differences between two popular frameworks, Torch and PyTorch. As the names suggest, the two frameworks share a common origin but have taken different paths in the quest to improve Deep Learning for all.
Torch was developed first; PyTorch came later as a Python reimplementation of its core ideas. Torch grew out of academic research and later received major contributions from Facebook, which also created PyTorch. Both are open source.
The development of Torch has stopped, while PyTorch is in active development.
The internal C libraries used by Torch will continue to be supported and developed, since PyTorch wraps them as well.
Source code and Usage
Torch was written in Lua while PyTorch was written in Python.
PyTorch and Torch use the same C libraries, which contain all the performance-critical code, such as TH (CPU tensor operations), THC (CUDA tensor operations), THNN (CPU neural network kernels) and THCUNN (CUDA neural network kernels).
These libraries will continue to be shared between the two frameworks.
There are some architectural improvements in PyTorch as well.
In Torch, models were composed using container modules. In PyTorch, the idiomatic approach is instead to define the model as a subclass of Module and implement the forward pass as a method; the backward pass is derived automatically by autograd.
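As a minimal sketch of this idiom (the model and its layer sizes are arbitrary choices for illustration), a PyTorch model subclasses nn.Module and only needs to define forward:

```python
import torch
import torch.nn as nn

# A minimal model defined by subclassing nn.Module: we implement forward,
# and autograd derives the backward pass automatically.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        return self.fc2(x)

model = TinyNet()
out = model(torch.randn(3, 4))  # batch of 3 samples, 4 features each
print(out.shape)  # torch.Size([3, 2])
```

No explicit backward method is written anywhere: calling `out.sum().backward()` would populate gradients for both layers.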
Recurrent nets, weight sharing and memory usage are big advantages of PyTorch over Torch: because the computation graph is built dynamically on every forward pass, reusing a module is enough to share its weights.
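To illustrate the weight-sharing point, here is a hedged sketch (the module names and sizes are invented for this example) in which one Linear layer is applied twice, so its parameters are shared across both steps:

```python
import torch
import torch.nn as nn

# Weight sharing in PyTorch: simply reuse the same Module instance.
# The single Linear layer below is applied twice in forward, so its
# parameters are shared and accumulate gradients from both uses.
class SharedStep(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(5, 5)

    def forward(self, x):
        x = torch.tanh(self.layer(x))
        return torch.tanh(self.layer(x))  # same weights, second application

model = SharedStep()
y = model(torch.randn(2, 5))
y.sum().backward()
assert model.layer.weight.grad is not None  # gradients from both uses
```

This is the same pattern a recurrent net uses: the cell module is reused at every time step, and autograd handles the unrolled backward pass.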
PyTorch has native ONNX support, while Torch does not.
ONNX support matters for the interoperability of AI models: a model exported to ONNX can be loaded by other runtimes and frameworks. Take a look at this article to understand the concept behind ONNX.
As the development of Torch has stopped, there is little doubt that you should go with PyTorch. Beyond that, PyTorch is a clear win over Torch in terms of speed, industry support and ease of use.