Python Parallel Programming Cookbook - Second Edition

By: Giancarlo Zaccone
Overview of this book

Nowadays, it has become extremely important for programmers to understand the link between their software and the parallel nature of their hardware so that their programs run efficiently on modern computer architectures. Applications based on parallel programming are fast, robust, and easily scalable.

This updated edition features cutting-edge techniques for building effective concurrent applications in Python 3.7. The book introduces parallel programming architectures and covers the fundamental recipes for thread-based and process-based parallelism. You'll learn about mutexes, semaphores, locks, and queues, exploiting the threading and multiprocessing modules, all of which are basic tools to build parallel applications. Recipes on MPI programming will help you to synchronize processes using the fundamental message-passing techniques with mpi4py. Furthermore, you'll get to grips with asynchronous programming and how to use the power of the GPU with the PyCUDA and PyOpenCL frameworks. Finally, you'll explore how to design distributed computing systems with Celery and architect Python apps on the cloud using PythonAnywhere, Docker, and serverless applications.

By the end of this book, you will be confident in building concurrent and high-performing applications in Python.

Heterogeneous programming with PyCUDA

The CUDA programming model (and, hence, that of PyCUDA) is designed for the joint execution of a software application on a CPU and a GPU: the sequential parts of the application run on the CPU, while the parts that can be parallelized run on the GPU. Unfortunately, the computer is not smart enough to distribute the code autonomously, so it is up to the developer to indicate which parts should run on the CPU and which on the GPU.

In fact, a CUDA application is composed of serial components, which are executed by the system CPU (the host), and parallel components, called kernels, which are executed by the GPU (the device).
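This division of labor can be sketched in pure Python. In real PyCUDA code, the kernel would be written in CUDA C and compiled with pycuda.compiler.SourceModule; here a plain Python function stands in for it, and the names (kernel_body, host_array) and array size are illustrative, chosen only to show which part belongs to the host and which to the device:

```python
import numpy as np

# Stand-in for a CUDA kernel: in real CUDA, each GPU thread would
# execute this body for exactly one index of the array.
def kernel_body(a, idx):
    a[idx] *= 2.0

# --- Serial (host) part: the CPU prepares the input data ---
host_array = np.arange(8, dtype=np.float32)

# --- Parallel (device) part, emulated: on a GPU this loop
# disappears, because one thread runs kernel_body per index ---
for idx in range(host_array.size):
    kernel_body(host_array, idx)

print(host_array)  # every element has been doubled
```

The key point of the model survives even in this emulation: the kernel body contains no loop over the data; iteration is replaced by launching many threads, one per element.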

A kernel is launched as a grid, which can, in turn, be decomposed into blocks that are sequentially assigned to the various multiprocessors, thus implementing coarse-grained parallelism. Inside each block, the work is further divided among threads, which execute on the cores of a single multiprocessor, providing fine-grained parallelism.
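The grid/block/thread hierarchy can also be emulated in plain Python to show how CUDA derives a unique global index for every thread. The grid and block sizes below are arbitrary illustrative values; in real PyCUDA code they are supplied through the block and grid arguments of the kernel launch:

```python
# Emulate a 1D CUDA launch: a grid of 4 blocks, 8 threads per block.
GRID_DIM = 4    # blocks in the grid (coarse-grained parallelism)
BLOCK_DIM = 8   # threads per block (fine-grained parallelism)

global_ids = []
for block_idx in range(GRID_DIM):         # blocks -> multiprocessors
    for thread_idx in range(BLOCK_DIM):   # threads -> cores
        # The same formula every 1D CUDA kernel uses to locate
        # its element: blockIdx.x * blockDim.x + threadIdx.x
        idx = block_idx * BLOCK_DIM + thread_idx
        global_ids.append(idx)

# The 4 * 8 = 32 threads cover indices 0..31 exactly once.
print(global_ids == list(range(GRID_DIM * BLOCK_DIM)))  # True
```

On a GPU, the two loops do not exist: all 32 threads run concurrently, and each one computes only its own idx.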