PyTorch has become one of the most important Python libraries for people working in data science and AI. Microsoft recently added enterprise support for PyTorch deep learning on Azure, and PyTorch has become the standard for AI workloads at Facebook. Like Google's TensorFlow, PyTorch integrates with important Python add-ons such as NumPy and suits data-science tasks that require faster GPU processing.

The PyTorch linear algebra module torch.linalg has moved to stable in version 1.9, giving NumPy users a familiar add-on for working with maths, according to the release notes. Per those notes, the module "extends PyTorch's support for it with implementations of every function from NumPy's linear algebra module (now with support for accelerators and autograd) and more, like torch.linalg.matrix_norm and torch.linalg.householder_product."

Also moving to stable is the Complex Autograd feature, which gives users a way to "calculate complex gradients and optimize real valued loss functions with complex variables." "This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI," the PyTorch project notes.

There are also some debugging goodies in this release, including the new torch.use_deterministic_algorithms option. Enabling it makes operations behave deterministically where a deterministic implementation exists; operations that might otherwise behave nondeterministically raise a runtime error instead.

There's a new beta of the torch.special module, similar to SciPy's special module. It brings many functions that are helpful for scientific computing and working with distributions, such as iv, ive, erfcx, logerfc, and logerfcx.

This version also brings the PyTorch Mobile interpreter, a slimmed-down version of the PyTorch runtime made for executing programs on edge devices. It should cut binary size significantly compared with the current on-device runtime. "The current pt size with MobileNetV2 in arm64-v8a Android is 8.6 MB compressed and 17.8 MB uncompressed. Using Mobile Interpreter, we are targeting at the compressed size below 4 MB and uncompressed size below 8MB," the PyTorch project notes.

Mobile app developers can also use the TorchVision library in their iOS and Android apps. The library contains C++ TorchVision ops to help with tasks like object detection and segmentation in videos and images.

There are several additions to help with distributed training of machine-learning models. TorchElastic is now in beta and part of core PyTorch, and is used to "gracefully handle scaling events". There's also CUDA support for RPC: CUDA RPC sends Tensors from local CUDA memory to remote CUDA memory for more efficient peer-to-peer Tensor communication.

On the performance front, this version of PyTorch also brings the stable release of the Freezing application programming interface (API), a beta of the PyTorch Profiler, a beta of the Inference Mode API, and a beta of torch.package, a new way to package PyTorch models.

The sketches below illustrate several of these new APIs.
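To give a flavour of the now-stable torch.linalg module, here's a minimal sketch using two of the functions the release notes name a close relative of (torch.linalg.solve and torch.linalg.matrix_norm); the matrix values are random placeholders:

```python
import torch

A = torch.randn(3, 3, dtype=torch.float64)
b = torch.randn(3, dtype=torch.float64)

x = torch.linalg.solve(A, b)        # solve the linear system Ax = b
norm = torch.linalg.matrix_norm(A)  # Frobenius norm by default

print(torch.allclose(A @ x, b))  # True, up to floating-point tolerance
print(norm)
```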
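Complex Autograd can be exercised in a few lines. This sketch assumes a toy real-valued loss built from a complex variable, which is exactly the pattern the release notes describe:

```python
import torch

# A complex parameter and a real-valued loss built from it
z = torch.randn(4, dtype=torch.cfloat, requires_grad=True)
target = torch.randn(4, dtype=torch.cfloat)

loss = (z - target).abs().pow(2).sum()  # real scalar loss
loss.backward()

print(z.grad)  # complex gradient of the real loss with respect to z
```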
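A minimal sketch of the deterministic-algorithms switch. The CUDA branch assumes an op that, per the 1.9 documentation, lacks a deterministic CUDA implementation (torch.kthvalue is one documented example):

```python
import torch

torch.use_deterministic_algorithms(True)

# Ops with deterministic implementations run as usual
a = torch.randn(4, 4)
print(a @ a)

# Ops without a deterministic implementation raise a RuntimeError
# instead of silently producing run-to-run differences
if torch.cuda.is_available():
    try:
        torch.randn(10, device="cuda").kthvalue(3)
    except RuntimeError as err:
        print(err)
```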
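A quick taste of the beta torch.special module, using a few of its SciPy-style members (expit, erfc and gammaln) rather than the Bessel-family functions named above:

```python
import torch

x = torch.linspace(-2.0, 2.0, steps=5)

print(torch.special.expit(x))              # logistic sigmoid, like scipy.special.expit
print(torch.special.erfc(x))               # complementary error function
print(torch.special.gammaln(x.abs() + 1))  # log-gamma, like scipy.special.gammaln
```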
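Preparing a model for the mobile interpreter follows the recipe from the PyTorch Mobile documentation; the choice of MobileNetV2 echoes the project's own example, and the output file name is illustrative:

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.mobilenet_v2(pretrained=True).eval()

scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)

# Save in the lite-interpreter format that the mobile runtime consumes
optimized._save_for_lite_interpreter("mobilenet_v2.ptl")
```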
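A sketch of CUDA RPC's device mapping, which is what lets Tensors move GPU-to-GPU instead of staging through CPU memory. It assumes a two-process setup with MASTER_ADDR and MASTER_PORT exported; the worker names and device indices are illustrative:

```python
import torch.distributed.rpc as rpc

opts = rpc.TensorPipeRpcBackendOptions()
# Map local cuda:0 to cuda:0 on "worker1" so RPC can send CUDA
# tensors directly between GPUs
opts.set_device_map("worker1", {0: 0})

rpc.init_rpc("worker0", rank=0, world_size=2, rpc_backend_options=opts)
# ... rpc.rpc_sync calls can now pass CUDA tensors directly ...
rpc.shutdown()
```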
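The now-stable freezing API is a one-liner on a scripted module in eval mode; the model here is a toy stand-in:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
scripted = torch.jit.script(model)

# Freezing inlines parameters and attributes as constants in the graph,
# unlocking optimizations such as constant folding
frozen = torch.jit.freeze(scripted)
print(frozen(torch.randn(1, 8)).shape)
```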
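The beta PyTorch Profiler is driven from a context manager; this sketch profiles CPU activity for a single forward pass of a toy model:

```python
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(128, 128)
x = torch.randn(32, 128)

with profile(activities=[ProfilerActivity.CPU]) as prof:
    model(x)

# Print the five most expensive ops by total CPU time
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))
```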
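Inference Mode works like torch.no_grad but also skips further autograd bookkeeping, such as view and version-counter tracking; a minimal sketch:

```python
import torch

model = torch.nn.Linear(16, 4)
x = torch.randn(2, 16)

# Tensors created inside inference mode cannot later be used in autograd
with torch.inference_mode():
    out = model(x)

print(out.requires_grad)  # False
```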
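And a minimal sketch of the beta torch.package workflow. The archive and resource names are illustrative, and externing torch is an assumption here: it tells the packager to rely on the target environment's own torch install rather than bundling it:

```python
import torch
from torch.package import PackageExporter, PackageImporter

model = torch.nn.Linear(4, 2)

# Export: pickle the model into a self-contained archive
with PackageExporter("linear_model.pt") as exporter:
    exporter.extern(["torch", "torch.**"])
    exporter.save_pickle("model", "model.pkl", model)

# Import: load the model back out of the archive
importer = PackageImporter("linear_model.pt")
restored = importer.load_pickle("model", "model.pkl")
print(restored(torch.randn(1, 4)))
```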