Scalable Parallel Computing Lab
Tal Ben-Nun is a researcher at the Scalable Parallel Computing Laboratory (SPCL) at ETH Zurich. He received his Ph.D. in Computer Science and Computational Chemistry from the Hebrew University of Jerusalem in 2016. During his Ph.D., he achieved scientific breakthroughs in Solution Small-Angle X-ray Scattering (SAXS) analysis using massively parallel programming models and nonlinear optimization, reducing computation time from a month to seconds.
Today, he researches the intersections of high-performance computing, machine learning, computational science, and parallel and distributed programming models. His research includes maximizing the utilization of supercomputers for large-scale distributed deep learning and simulation applications, developing a data-centric parallel programming model for scientific computing applications, and advancing machine comprehension of code via deep learning and partial compilation. His research interests also include operating systems, where he investigates novel process-scheduling mechanisms for heterogeneous resources.
Demystifying Parallel and Distributed Deep Learning
Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge, and techniques range from distributed algorithms to low-level circuit design. This talk outlines deep learning from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications for parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on these approaches, we extrapolate potential directions for parallelism in deep learning.
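To make the idea of data-parallel training concrete, the sketch below (illustrative only, not taken from the talk) simulates synchronous data-parallel SGD on a toy one-dimensional regression problem: each "worker" computes a gradient on its own shard of the data, and the gradients are averaged before one global update, which is what an allreduce performs in a real distributed setting. All names and parameters here are hypothetical.

```python
# Illustrative sketch (hypothetical): synchronous data-parallel SGD.
# Each "worker" computes a gradient on its own shard of the data;
# averaging the local gradients (an allreduce in a real distributed
# system) yields one global weight update per step.

import random

random.seed(0)

# Toy 1-D linear regression: the true model is y = 3x.
xs = [random.uniform(-1, 1) for _ in range(64)]
data = [(x, 3.0 * x) for x in xs]

def local_gradient(w, shard):
    # Mean-squared-error gradient for the model y_hat = w * x on one shard.
    return sum(2.0 * (w * x - y) * x for x, y in shard) / len(shard)

num_workers = 4
shard_size = len(data) // num_workers
shards = [data[i * shard_size:(i + 1) * shard_size]
          for i in range(num_workers)]

w, lr = 0.0, 0.5
for step in range(100):
    grads = [local_gradient(w, s) for s in shards]  # parallel in practice
    w -= lr * sum(grads) / num_workers              # simulated allreduce

print(round(w, 3))  # converges toward the true weight 3.0
```

Because the averaged gradient equals the gradient over the full batch, this synchronous scheme is mathematically equivalent to sequential mini-batch SGD; asynchronous variants discussed in the talk relax that equivalence to reduce synchronization cost.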