
Learning Vulkan

Let's discuss the Vulkan programming model in detail. Here, the end user, even a total beginner, will be able to understand the following concepts:
The following diagram shows a top-down approach of the Vulkan application programming model; we will understand this process in detail and also delve into the sublevel components and their functionalities:
When a Vulkan application starts, its very first job is the initialization of the hardware. Here, the application activates the Vulkan drivers by communicating with the loader. The following diagram represents a block diagram of a Loader with its subcomponents:
Loader: A loader is a piece of code used in the application start-up to locate the Vulkan drivers in a system in a unified way across platforms. The following are the responsibilities of a loader:
The Vulkan application first performs a handshake with the loader library and initializes the Vulkan implementation driver. The loader library loads Vulkan APIs dynamically. The loader also offers a mechanism that allows the automatic loading of specific layers into all Vulkan applications; this is called an Implicit-Enabled layer.
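In code, this handshake boils down to creating a Vulkan instance; it is at this call that the loader locates the installed driver and pulls in any implicit-enabled layers. The following is a minimal sketch; no explicit layers or extensions are requested here, and the application name is purely illustrative:

#include <vulkan/vulkan.h>

// Create a Vulkan instance; this is the point where the loader locates the
// installed driver(s) and injects any implicit-enabled layers.
VkInstance createInstance()
{
    VkApplicationInfo appInfo = {};
    appInfo.sType            = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "Hello Vulkan";        // illustrative name
    appInfo.apiVersion       = VK_API_VERSION_1_0;

    VkInstanceCreateInfo instanceInfo = {};
    instanceInfo.sType            = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    instanceInfo.pApplicationInfo = &appInfo;
    // Layer and extension counts stay zero in this sketch; a real application
    // would enable VK_KHR_surface and the platform-specific surface extension.

    VkInstance instance = VK_NULL_HANDLE;
    VkResult result = vkCreateInstance(&instanceInfo, nullptr, &instance);
    // VK_SUCCESS means the loader found a usable Vulkan driver.
    return (result == VK_SUCCESS) ? instance : VK_NULL_HANDLE;
}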
Once the loader locates the drivers and successfully links with the APIs, the application is responsible for the following:
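Broadly, these responsibilities include enumerating the physical devices exposed by the driver, picking one, and creating a logical device along with the queues the application intends to use. The sketch below makes two simplifying assumptions: the first enumerated physical device is acceptable, and its queue family 0 supports graphics.

#include <vulkan/vulkan.h>

// Pick the first physical device and create a logical device with one
// graphics-capable queue. A real application would inspect the queue
// family properties instead of assuming family 0.
VkDevice createDevice(VkInstance instance, VkPhysicalDevice* outGpu)
{
    uint32_t gpuCount = 0;
    vkEnumeratePhysicalDevices(instance, &gpuCount, nullptr);
    if (gpuCount == 0) return VK_NULL_HANDLE;

    VkPhysicalDevice gpu = VK_NULL_HANDLE;
    gpuCount = 1;                                  // take only the first device
    vkEnumeratePhysicalDevices(instance, &gpuCount, &gpu);
    *outGpu = gpu;

    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {};
    queueInfo.sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO;
    queueInfo.queueFamilyIndex = 0;                // assumption: family 0 supports graphics
    queueInfo.queueCount       = 1;
    queueInfo.pQueuePriorities = &priority;

    VkDeviceCreateInfo deviceInfo = {};
    deviceInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos    = &queueInfo;

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(gpu, &deviceInfo, nullptr, &device);
    return device;
}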
Once the Vulkan implementation driver is located by the loader, we are good to draw something using the Vulkan APIs. For this, we need an image to perform the drawing task and put it on the presentation window to display it:
Building a presentation image and creating windows are very platform-specific jobs. In OpenGL, windowing is intimately linked to the API; the window system framebuffer is created along with the context/device. The big difference from GL here is that context/device creation in Vulkan needn't involve the window system at all; presentation is instead managed through Window System Integration (WSI).
WSI contains a set of cross-platform windowing management extensions:
WSI supports multiple windowing systems, such as Wayland, X, and Windows, and it also manages the ownership of images via a swapchain.
WSI provides a swapchain mechanism; this allows the use of multiple images in such a way that, while the window system is displaying one image, the application can prepare the next.
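As an illustration, a double-buffered swapchain can be created from an existing WSI surface roughly as follows. The surface format, color space, extent, and present mode used here are assumptions; a real application would query them first through the VK_KHR_surface capability functions.

#include <vulkan/vulkan.h>

// Create a double-buffered swapchain for an existing WSI surface.
// Format, color space, extent, and present mode are assumed here.
VkSwapchainKHR createSwapchain(VkDevice device, VkSurfaceKHR surface,
                               uint32_t width, uint32_t height)
{
    VkSwapchainCreateInfoKHR info = {};
    info.sType            = VK_STRUCTURE_TYPE_SWAPCHAIN_CREATE_INFO_KHR;
    info.surface          = surface;
    info.minImageCount    = 2;                                // double buffering
    info.imageFormat      = VK_FORMAT_B8G8R8A8_UNORM;         // assumed to be supported
    info.imageColorSpace  = VK_COLOR_SPACE_SRGB_NONLINEAR_KHR;
    info.imageExtent      = { width, height };
    info.imageArrayLayers = 1;
    info.imageUsage       = VK_IMAGE_USAGE_COLOR_ATTACHMENT_BIT;
    info.imageSharingMode = VK_SHARING_MODE_EXCLUSIVE;
    info.preTransform     = VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR;
    info.compositeAlpha   = VK_COMPOSITE_ALPHA_OPAQUE_BIT_KHR;
    info.presentMode      = VK_PRESENT_MODE_FIFO_KHR;         // vsync; always available
    info.clipped          = VK_TRUE;

    VkSwapchainKHR swapchain = VK_NULL_HANDLE;
    vkCreateSwapchainKHR(device, &info, nullptr, &swapchain);
    return swapchain;
}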
The following screenshot shows the double-buffering swap image process. It contains two images named First Image and Second Image. These images are swapped between Application and Display with the help of WSI:
WSI works as an interface between Display and Application. It makes sure that both images are acquired by Display and Application in a mutually exclusive way. Therefore, while the Application works on First Image, WSI hands over Second Image to Display in order to render its contents. Once the Application finishes painting First Image, it submits it to the WSI and in return acquires Second Image to work with, and vice versa.
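In code, this ping-pong maps onto vkAcquireNextImageKHR and vkQueuePresentKHR. The sketch below shows one frame of the loop, assuming the command buffers (one per swapchain image) are already recorded and that the same queue is used for rendering and presentation:

#include <vulkan/vulkan.h>
#include <cstdint>

// One frame of the swap loop: acquire an image, submit the work that renders
// into it, then hand the image back to the presentation engine.
void presentOneFrame(VkDevice device, VkSwapchainKHR swapchain, VkQueue queue,
                     VkSemaphore imageReady, VkSemaphore renderDone,
                     const VkCommandBuffer* cmdBuffers /* one per swapchain image */)
{
    uint32_t imageIndex = 0;
    vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, imageReady,
                          VK_NULL_HANDLE, &imageIndex);

    VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
    VkSubmitInfo submit = {};
    submit.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.waitSemaphoreCount   = 1;
    submit.pWaitSemaphores      = &imageReady;      // wait until the image is ours
    submit.pWaitDstStageMask    = &waitStage;
    submit.commandBufferCount   = 1;
    submit.pCommandBuffers      = &cmdBuffers[imageIndex];
    submit.signalSemaphoreCount = 1;
    submit.pSignalSemaphores    = &renderDone;      // signaled when rendering finishes
    vkQueueSubmit(queue, 1, &submit, VK_NULL_HANDLE);

    VkPresentInfoKHR present = {};
    present.sType              = VK_STRUCTURE_TYPE_PRESENT_INFO_KHR;
    present.waitSemaphoreCount = 1;
    present.pWaitSemaphores    = &renderDone;       // present only after rendering
    present.swapchainCount     = 1;
    present.pSwapchains        = &swapchain;
    present.pImageIndices      = &imageIndex;
    vkQueuePresentKHR(queue, &present);
}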
At this point, perform the following tasks:
Create an empty native window (using, for example, the CreateWindow method in the Windows OS)

Setting up resources means storing data into memory regions. This could be any type of data, for example, vertex attributes, such as position and color, or an image type/name. The data must reside somewhere in memory for Vulkan to access it.
Unlike OpenGL, which manages the memory behind the scenes using hints, Vulkan provides full low-level access and control of the memory. Vulkan advertises the various types of available memory on the physical device, providing the application with a fine opportunity to manage these different types of memory explicitly.
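For instance, the advertised memory types can be inspected to find one that matches both a resource's requirements and the properties the application cares about (device-local, host-visible, and so on). The helper below is an illustrative sketch:

#include <vulkan/vulkan.h>
#include <cstdint>

// Find the index of a memory type that satisfies both the resource's
// requirements (typeBits) and the properties the application wants, for
// example VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT or VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT.
uint32_t findMemoryType(VkPhysicalDevice gpu, uint32_t typeBits,
                        VkMemoryPropertyFlags wanted)
{
    VkPhysicalDeviceMemoryProperties memProps = {};
    vkGetPhysicalDeviceMemoryProperties(gpu, &memProps);

    for (uint32_t i = 0; i < memProps.memoryTypeCount; ++i)
    {
        bool allowed  = (typeBits & (1u << i)) != 0;
        bool suitable = (memProps.memoryTypes[i].propertyFlags & wanted) == wanted;
        if (allowed && suitable)
            return i;          // index into memProps.memoryTypes and its heap
    }
    return UINT32_MAX;         // no suitable memory type found
}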
Memory heaps can be categorized into two types, based upon their performance:
Memory heaps can be further divided based upon their memory type configurations:
In Vulkan, resources are explicitly taken care of by the application with exclusive control of memory management. The following is the process of resource management:
Physical memory allocation is expensive; therefore, a good practice is to allocate one large block of physical memory and then suballocate objects within it.
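A sketch of this suballocation pattern is shown below: one large VkDeviceMemory block is allocated up front, and each buffer is bound into it at the next properly aligned offset. The running-offset bookkeeping here is the application's own; Vulkan itself only sees the bind call.

#include <vulkan/vulkan.h>

// Bind a buffer into a large, already-allocated memory block at the next
// aligned offset, instead of calling vkAllocateMemory per buffer.
// 'currentOffset' tracks how much of the block is already in use.
VkDeviceSize subAllocateBuffer(VkDevice device, VkBuffer buffer,
                               VkDeviceMemory bigBlock, VkDeviceSize currentOffset)
{
    VkMemoryRequirements req = {};
    vkGetBufferMemoryRequirements(device, buffer, &req);

    // Round the running offset up to the alignment the buffer requires
    // (Vulkan alignments are powers of two).
    VkDeviceSize offset = (currentOffset + req.alignment - 1) & ~(req.alignment - 1);

    vkBindBufferMemory(device, buffer, bigBlock, offset);
    return offset + req.size;          // new running offset for the next resource
}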
In contrast, OpenGL resource management does not offer granular control over memory. There is no concept of host and device memory; the driver silently does all of the allocation in the background. Also, these allocation and suballocation processes are not fully transparent and might change from one driver to another. This lack of consistency and hidden memory management causes unpredictable behavior. Vulkan, on the other hand, allocates the object right there in the chosen memory, making it highly predictable.
Therefore, during the resource setup stage, you need to perform the following tasks:
A pipeline is a set of events that occur in a fixed sequence defined by the application logic. These events consist of the following: supplying the shaders, binding them to the resource, and managing the state:
A descriptor set is an interface between resources and shaders. It is a simple structure that binds the shader to the resource information, such as images or buffers. It associates or binds a resource memory that the shader is going to use. The following are the characteristics associated with descriptor sets:
Updating or changing a descriptor set is one of the most performance-critical paths in Vulkan rendering. Therefore, the design of descriptor sets is an important aspect of achieving maximum performance. Vulkan supports logical partitioning of multiple descriptor sets at the scene (low-frequency updates), model (medium-frequency updates), and draw (high-frequency updates) levels. This ensures that high-frequency updates do not affect the low-frequency descriptor resources.
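As a minimal illustration, the sketch below points binding 0 of an already-allocated descriptor set at a uniform buffer; the binding number and the uniform-buffer descriptor type are assumptions chosen for the example. The scene/model/draw partitioning described above is then simply a matter of grouping such bindings into separate descriptor sets and set indices.

#include <vulkan/vulkan.h>

// Point binding 0 of an already-allocated descriptor set at a uniform buffer,
// so the shader can read the buffer's contents.
void bindUniformBuffer(VkDevice device, VkDescriptorSet descriptorSet,
                       VkBuffer uniformBuffer, VkDeviceSize sizeInBytes)
{
    VkDescriptorBufferInfo bufferInfo = {};
    bufferInfo.buffer = uniformBuffer;
    bufferInfo.offset = 0;
    bufferInfo.range  = sizeInBytes;

    VkWriteDescriptorSet write = {};
    write.sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
    write.dstSet          = descriptorSet;
    write.dstBinding      = 0;                               // binding = 0 in the shader
    write.descriptorCount = 1;
    write.descriptorType  = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    write.pBufferInfo     = &bufferInfo;

    vkUpdateDescriptorSets(device, 1, &write, 0, nullptr);
}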
The only way to specify shaders or compute kernels in Vulkan is through SPIR-V. The following are some characteristics associated with it:
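For illustration, a SPIR-V binary (already compiled offline and loaded from disk) is handed to Vulkan by wrapping it in a shader module; note that codeSize is given in bytes while the code itself is an array of 32-bit words:

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Wrap a SPIR-V binary in a VkShaderModule so it can be attached to a
// pipeline stage later on.
VkShaderModule createShaderModule(VkDevice device, const std::vector<uint32_t>& spirv)
{
    VkShaderModuleCreateInfo info = {};
    info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
    info.codeSize = spirv.size() * sizeof(uint32_t);   // size in bytes
    info.pCode    = spirv.data();                      // SPIR-V 32-bit words

    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, nullptr, &module);
    return module;
}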
A physical device contains a range of hardware settings that determine how the submitted input data of a given geometry needs to be interpreted and drawn. These settings are collectively called pipeline states. These include the rasterizer state, blend state, and depth stencil state; they also include the primitive topology type (point/line/triangle) of the submitted geometry and the shaders that will be used for rendering. There are two types of states: dynamic and static. The pipeline states are used to create the pipeline object (graphics or compute), which is a performance-critical path. Therefore, we don't want to create them again and again; we want to create them once and reuse them.
Vulkan allows you to control states using pipeline objects in conjunction with Pipeline Cache Object (PCO) and the pipeline layout:
Pipeline caches are opaque, and the details of their use by the driver are unspecified. The application is responsible for persisting the cache if it wishes to reuse it across runs and for providing a suitable cache at the time of pipeline creation if it wishes to reap potential benefits.
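A sketch of that idea follows: the cache is created, optionally seeded with data saved from a previous run, and its contents can be read back for persistence; the surrounding file I/O is assumed and omitted. The returned cache handle is what gets passed to vkCreateGraphicsPipelines so the driver can reuse previously compiled state.

#include <vulkan/vulkan.h>
#include <vector>

// Create a pipeline cache, optionally seeded with data saved from a previous run.
VkPipelineCache createCache(VkDevice device, const std::vector<char>& previousRunData)
{
    VkPipelineCacheCreateInfo info = {};
    info.sType           = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO;
    info.initialDataSize = previousRunData.size();
    info.pInitialData    = previousRunData.empty() ? nullptr : previousRunData.data();

    VkPipelineCache cache = VK_NULL_HANDLE;
    vkCreatePipelineCache(device, &info, nullptr, &cache);
    return cache;
}

// Read the cache contents back so the application can persist them to disk.
std::vector<char> readCacheData(VkDevice device, VkPipelineCache cache)
{
    size_t size = 0;
    vkGetPipelineCacheData(device, cache, &size, nullptr);      // query the size first
    std::vector<char> data(size);
    vkGetPipelineCacheData(device, cache, &size, data.data());  // then retrieve the data
    return data;
}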
In the pipeline management stage, this is what happens:
Recording commands is the process of command buffer formation. Command buffers are allocated from command pool memory; a command pool can also be used for multiple allocations. A command buffer is recorded by supplying commands within a given start and end scope defined by the application. The following diagram illustrates the recording of a drawing command buffer; as you can see, it comprises a number of commands, recorded in top-down order, that are responsible for painting the object.
Note that the commands in the command buffer may vary with the job requirement. This diagram is just an illustration that covers the most common steps performed while drawing primitives.
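A sketch of such a recording is given below; the render pass, framebuffer, pipeline, pipeline layout, descriptor set, and vertex buffer are assumed to exist already:

#include <vulkan/vulkan.h>

// Record a simple drawing command buffer: begin the render pass, bind the
// pipeline and its resources, issue the draw, and close the scope.
void recordDraw(VkCommandBuffer cmd, VkRenderPass renderPass, VkFramebuffer framebuffer,
                VkExtent2D extent, VkPipeline pipeline, VkPipelineLayout layout,
                VkDescriptorSet descriptorSet, VkBuffer vertexBuffer, uint32_t vertexCount)
{
    VkCommandBufferBeginInfo begin = {};
    begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
    vkBeginCommandBuffer(cmd, &begin);                 // start of the recording scope

    VkClearValue clearColor = { { { 0.0f, 0.0f, 0.0f, 1.0f } } };   // opaque black

    VkRenderPassBeginInfo rpBegin = {};
    rpBegin.sType             = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    rpBegin.renderPass        = renderPass;
    rpBegin.framebuffer       = framebuffer;
    rpBegin.renderArea.extent = extent;
    rpBegin.clearValueCount   = 1;
    rpBegin.pClearValues      = &clearColor;
    vkCmdBeginRenderPass(cmd, &rpBegin, VK_SUBPASS_CONTENTS_INLINE);

    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, layout,
                            0, 1, &descriptorSet, 0, nullptr);

    VkDeviceSize offset = 0;
    vkCmdBindVertexBuffers(cmd, 0, 1, &vertexBuffer, &offset);
    vkCmdDraw(cmd, vertexCount, 1, 0, 0);              // draw the geometry

    vkCmdEndRenderPass(cmd);
    vkEndCommandBuffer(cmd);                           // end of the recording scope
}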
The major parts of the drawing process are covered here:
The creation of a command buffer is an expensive job; it is considered one of the most performance-critical paths. A command buffer can be reused numerous times if the same work needs to happen across many frames, and it can be resubmitted without needing to be re-recorded. Also, multiple command buffers can be produced simultaneously using multiple threads. Vulkan is specially designed to exploit multithreaded scalability, and command pools ensure there is no lock contention when they are used in a multithreaded environment.
The following diagram shows a scalable command buffer creation model with a multicore and multithreading approach. This model provides true parallelism with multicore processors.
Here, each thread uses a separate command buffer pool, from which it allocates one or more command buffers, so there is no contention on resource locks.
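A sketch of this per-thread arrangement: each worker thread creates its own command pool and allocates its command buffers from it. Because command pools are externally synchronized objects, keeping one per thread removes any need for locks during recording.

#include <vulkan/vulkan.h>

// Give each worker thread its own command pool and allocate its command
// buffer from that pool, so threads can record in parallel without locking.
VkCommandBuffer createPerThreadCommandBuffer(VkDevice device, uint32_t queueFamilyIndex,
                                             VkCommandPool* outPool)
{
    VkCommandPoolCreateInfo poolInfo = {};
    poolInfo.sType            = VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO;
    poolInfo.queueFamilyIndex = queueFamilyIndex;
    vkCreateCommandPool(device, &poolInfo, nullptr, outPool);

    VkCommandBufferAllocateInfo allocInfo = {};
    allocInfo.sType              = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO;
    allocInfo.commandPool        = *outPool;
    allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
    allocInfo.commandBufferCount = 1;

    VkCommandBuffer cmd = VK_NULL_HANDLE;
    vkAllocateCommandBuffers(device, &allocInfo, &cmd);
    return cmd;            // each thread records into its own command buffer
}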
Once command buffers are built, they can be submitted to a queue for processing. Vulkan exposes different types of queues to the application, such as graphics, DMA/transfer, and compute queues. Queue selection for submission depends very much on the nature of the job; for example, graphics-related tasks must be submitted to the graphics queue, and for compute operations the compute queue is the best choice. The submitted jobs are executed asynchronously. Command buffers can be pushed into separate, compatible queues, allowing parallel execution. The application is responsible for any kind of synchronization within command buffers or between queues, and even between the host and the device.
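As an illustration, retrieving a queue from the device and submitting a batch of recorded command buffers, with a fence for host-device synchronization, might look like the following sketch:

#include <vulkan/vulkan.h>
#include <cstdint>

// Fetch a queue from the device, submit recorded command buffers to it, and
// use a fence so the host knows when the GPU has finished the batch.
void submitAndWait(VkDevice device, uint32_t queueFamilyIndex,
                   const VkCommandBuffer* cmdBuffers, uint32_t count)
{
    VkQueue queue = VK_NULL_HANDLE;
    vkGetDeviceQueue(device, queueFamilyIndex, 0, &queue);

    VkFenceCreateInfo fenceInfo = {};
    fenceInfo.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
    VkFence fence = VK_NULL_HANDLE;
    vkCreateFence(device, &fenceInfo, nullptr, &fence);

    VkSubmitInfo submit = {};
    submit.sType              = VK_STRUCTURE_TYPE_SUBMIT_INFO;
    submit.commandBufferCount = count;
    submit.pCommandBuffers    = cmdBuffers;
    vkQueueSubmit(queue, 1, &submit, fence);           // asynchronous execution starts here

    // Host-device synchronization: block until the GPU signals the fence.
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
    vkDestroyFence(device, fence, nullptr);
}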
Queue submission performs the following jobs: