Linux Kernel Programming

By: Kaiwan N. Billimoria
4.9 (35)
Overview of this book

The 2nd Edition of Linux Kernel Programming is an updated, comprehensive guide for new programmers to the Linux kernel. This book uses the recent 6.1 Long-Term Support (LTS) Linux kernel series, which will be maintained until December 2026, and also delves into its many new features. Further, the Civil Infrastructure Platform project has pledged to maintain and support this 6.1 Super LTS (SLTS) kernel until August 2033, keeping this book valid for years to come!

You'll begin this exciting journey by learning how to build the kernel from source. In a step-by-step manner, you will then learn how to write your first kernel module by leveraging the kernel's powerful Loadable Kernel Module (LKM) framework. With this foundation, you will delve into key kernel internals topics, including Linux kernel architecture, memory management, and CPU (task) scheduling. You'll finish by exploring the deep issues of concurrency and gaining insight into how they can be addressed with various synchronization/locking technologies (e.g., mutexes, spinlocks, atomic/refcount operators, rw-spinlocks, and even lock-free technologies such as per-CPU and RCU). By the end of this book, you'll have a much better understanding of the fundamentals of writing Linux kernel and kernel module code that can straight away be used in real-world projects and products.

Critical sections, exclusive execution, and atomicity

Imagine you're writing software for a multicore system (nowadays, it's typical that you will work on multicore systems, even on most embedded projects). As we mentioned in the introduction, running multiple code paths in parallel is not only safe but also desirable (why spend those dollars otherwise, right?). On the other hand, wherever concurrent (parallel and simultaneous) code paths access shared writable data (also known as shared state) in any manner, you are required to guarantee that only one thread can work on that data at a time! This is key. Why? Think about it: if you allow multiple concurrent code paths to work in parallel on shared writable data, you're asking for trouble: data corruption (a "data race") can occur as a result. The following section, after covering some key points, will clearly illustrate the data race concept with...
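To make the idea concrete, here is a minimal, illustrative sketch (it is not the book's example, and all names here - cs_demo, shared_counter, incrementer_fn, counter_lock - are hypothetical): a tiny kernel module in which two kernel threads increment a shared counter. The spin_lock()/spin_unlock() pair delimits the critical section, so only one thread can modify the shared state at any given time.

/*
 * cs_demo.c - illustrative sketch only: two kernel threads increment a
 * shared counter. The spinlock delimits the critical section, guaranteeing
 * that only one thread works on the shared writable data at a time.
 */
#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/spinlock.h>
#include <linux/delay.h>
#include <linux/err.h>

static u64 shared_counter;              /* shared writable data ("shared state") */
static DEFINE_SPINLOCK(counter_lock);   /* protects shared_counter */
static struct task_struct *t1, *t2;

static int incrementer_fn(void *data)
{
    int i;

    for (i = 0; i < 100000; i++) {
        spin_lock(&counter_lock);       /* --- begin critical section --- */
        shared_counter++;               /* exclusive access: no data race */
        spin_unlock(&counter_lock);     /* --- end critical section --- */
    }
    /* don't exit until the module's cleanup code asks us to stop */
    while (!kthread_should_stop())
        msleep(20);
    return 0;
}

static int __init cs_demo_init(void)
{
    t1 = kthread_run(incrementer_fn, NULL, "cs_demo/0");
    if (IS_ERR(t1))
        return PTR_ERR(t1);
    t2 = kthread_run(incrementer_fn, NULL, "cs_demo/1");
    if (IS_ERR(t2)) {
        kthread_stop(t1);
        return PTR_ERR(t2);
    }
    return 0;
}

static void __exit cs_demo_exit(void)
{
    kthread_stop(t1);
    kthread_stop(t2);
    pr_info("cs_demo: final counter = %llu (expected 200000)\n", shared_counter);
}

module_init(cs_demo_init);
module_exit(cs_demo_exit);
MODULE_LICENSE("GPL");

Without the lock, the two read-modify-write sequences on shared_counter could interleave on different CPUs and the final count would typically come up short of 200000; with the critical section in place (or with an atomic/refcount counter instead), the result is deterministic.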
