Haskell High Performance Programming

By: Thomasson
Overview of this book

Haskell, with its optimizing compiler and strong runtime performance, is a natural candidate for high-performance programming. It is especially well suited to stacking abstractions high at a relatively low performance cost. This book addresses the challenges of writing efficient code under lazy evaluation, along with the techniques commonly used to optimize the performance of Haskell programs. We open with an in-depth look at how Haskell expressions are evaluated, then discuss optimization and benchmarking. You will learn to use parallelism, and we'll explore the concept of streaming. We'll demonstrate the benefits of running multithreaded and concurrent applications. Next, we'll guide you through various profiling tools that will help you identify performance issues in your program. We'll end our journey by looking at GPGPU, cloud, and Functional Reactive Programming in Haskell. The book closes with a catalogue of robust library recommendations with code samples. By the end of the book, you will be able to boost the performance of any app and prepare it to stand up to real-world punishment.
Table of Contents (16 chapters)
Running with the CUDA backend

To compile using the CUDA backend, install the accelerate-cuda package from Hackage. The CUDA platform is also required; refer to the accelerate-cuda package documentation and the CUDA platform documentation for further information:

cabal install accelerate-cuda -fdebug

The Haskell dependencies require some additional tools in scope, including alex, happy, and c2hs; install those first if necessary. The debug flag gives our Accelerate CUDA programs some additional debugging facilities, at no extra runtime cost compared to a build without the flag. The additional flags it enables could, however, interfere with the user program.
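Putting the steps above together, a full installation sequence might look like the following (a sketch assuming a plain cabal-based setup; your environment may already provide some of these tools):

```shell
# Install the build tools the Haskell dependencies need, if not already present
cabal install alex happy c2hs

# Then install the CUDA backend itself, with the debug flag enabled
cabal install accelerate-cuda -fdebug
```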

In principle, the only code change needed to use the CUDA backend instead of the interpreter is to import the run function from Data.Array.Accelerate.CUDA rather than from the Interpreter module:

import Data.Array.Accelerate.CUDA
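The book's full listing is not reproduced in this excerpt. As an illustration, a complete program of this shape might be sketched as follows; the matMul helper follows the standard replicate/zipWith/fold matrix-product formulation from the Accelerate examples, and the fill data in main is a placeholder, not the book's own:

```haskell
{-# LANGUAGE TypeOperators #-}
module Main where

import Data.Array.Accelerate      as A
import Data.Array.Accelerate.CUDA (run)  -- swap for Data.Array.Accelerate.Interpreter to run on the CPU

-- Matrix product of an (n x k) and a (k x m) matrix: replicate both
-- operands into a common 3D shape, multiply elementwise, and fold away
-- the innermost (shared) dimension.
matMul :: Acc (Array DIM2 Float) -> Acc (Array DIM2 Float) -> Acc (Array DIM2 Float)
matMul arr brr = A.fold (+) 0 (A.zipWith (*) arrRepl brrRepl)
  where
    Z :. rowsA :. _     = unlift (shape arr) :: Z :. Exp Int :. Exp Int
    Z :. _     :. colsB = unlift (shape brr) :: Z :. Exp Int :. Exp Int
    arrRepl = A.replicate (lift $ Z :. All   :. colsB :. All) arr
    brrRepl = A.replicate (lift $ Z :. rowsA :. All   :. All) (A.transpose brr)

main :: IO ()
main = do
  let n = 100
      a = fromList (Z :. n :. n) [1..] :: Array DIM2 Float
  -- run compiles the expression to a CUDA kernel and executes it on the GPU
  print (run (matMul (use a) (use a)))
```

Running this requires the accelerate-cuda package and a CUDA-capable GPU; with only the import line changed back to the Interpreter module, the same source runs on the CPU.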

The program below executes our matrix product of 100x100 matrices on the GPU using CUDA. Note that swapping back to the interpreter is a...
