Here's what Wikipedia has to say about compute shaders:
In computing, a compute kernel is a routine compiled for high throughput accelerators (such as graphics processing units (GPUs), digital signal processors (DSPs) or field-programmable gate arrays (FPGAs)), separate from but used by a main program (typically running on a central processing unit). They are sometimes called compute shaders, sharing execution units with vertex shaders and pixel shaders on GPUs, but are not limited to execution on one class of device, or graphics APIs.
Note that in Babylon.js this is a WebGPU-only feature (starting with v5.0); WebGL does not support compute shaders.
You can query the engine for compute shader support by doing:
```javascript
engine.getCaps().supportComputeShaders
```
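As a minimal sketch, you can wrap that query in a guard before taking a compute-shader code path. The stand-in engine objects below are illustrative only; in real code you would pass your actual Babylon.js engine instance:

```javascript
// Guard helper: true only when the engine reports compute-shader support.
function canUseComputeShaders(engine) {
    return !!engine.getCaps().supportComputeShaders;
}

// Stand-in objects for illustration (a WebGPU engine reports true):
const webglLike = { getCaps: () => ({ supportComputeShaders: false }) };
const webgpuLike = { getCaps: () => ({ supportComputeShaders: true }) };
canUseComputeShaders(webglLike);  // false
canUseComputeShaders(webgpuLike); // true
```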
Creating a compute shader
Use the `ComputeShader` class to create a compute shader, much like you would use `ShaderMaterial` to create a material for some custom shader code:
```javascript
const cs1 = new BABYLON.ComputeShader("myCompute", engine, { computeSource: copyTextureComputeShader }, {
    bindingsMapping: {
        "dest": { group: 0, binding: 0 },
        "src": { group: 0, binding: 2 }
    }
});
```
`copyTextureComputeShader` is the shader code, and `bindingsMapping` is an object that maps each input variable name (from the shader source code) to its binding location: see below for more explanations.
Once you have created the compute shader instance, you can set the input values by using the appropriate methods (see the `ComputeShader` class):
```javascript
cs1.setTexture("src", src);
cs1.setStorageTexture("dest", dest);
```
A compute shader can be executed by calling one of the `dispatch()` or `dispatchWhenReady()` methods.
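The number of workgroups to dispatch depends on the `@workgroup_size` declared in the WGSL shader. A hedged sketch of the usual calculation, with the `cs1` usage shown only in comments since it needs a live Babylon.js engine:

```javascript
// One workgroup covers workgroupSize invocations along an axis, so we need
// ceil(total / workgroupSize) workgroups to cover every element.
function workgroupCount(totalSize, workgroupSize) {
    return Math.ceil(totalSize / workgroupSize);
}

// E.g. for a 1024x768 texture and a shader declaring @workgroup_size(8, 8):
const x = workgroupCount(1024, 8); // 128
const y = workgroupCount(768, 8);  // 96

// With the ComputeShader instance from the example above (illustrative):
// await cs1.dispatchWhenReady(x, y, 1); // waits for the compute effect to compile
// cs1.dispatch(x, y, 1);                // dispatch once the effect is known to be ready
```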
Types of variables passed to compute shaders
Variables you pass to a compute shader can be of the following types:
- texture: use `ComputeShader.setTexture()` to pass a regular texture to the shader.
- storage texture: a storage texture is a texture you can write to from your compute shader. Note that you can't read from a storage texture, only write to it. If you wrote something to a storage texture and need to read from it in a shader, simply pass it as a regular texture. In Babylon.js, storage textures are created as regular textures, but you pass a special flag for the `creationFlag` parameter: `BABYLON.Constants.TEXTURE_CREATIONFLAG_STORAGE`. There are also two helper methods you can use to create storage textures: `BABYLON.RawTexture.CreateRGBAStorageTexture()` and `BABYLON.RawTexture.CreateRStorageTexture()`. Use `ComputeShader.setStorageTexture()` to pass a storage texture to the shader.
- uniform buffer: a buffer you create by instantiating the `UniformBuffer` class, used to pass constant values to the shader side (values that can still be updated in the course of your program; they are constant inside the shader). Note that you must first create the layout of this buffer by calling the `addUniform` method in the order the properties appear in the buffer in the shader code! This last point is important, as the layout you create must match the layout of the buffer as used in the shader code. Once you have created the layout, you can set values through calls to the `updateXXX()` methods. When you are ready to update the buffer on the GPU side, call `update()`. Use `ComputeShader.setUniformBuffer()` to pass a uniform buffer to the shader.
- storage buffer: an arbitrary buffer that you can use to read or write values. Use the `StorageBuffer` class to create such buffers, and `ComputeShader.setStorageBuffer()` to pass a storage buffer to the shader.
- sampler: a custom texture sampler object that you can use to sample a texture. Use the `TextureSampler` class to create one, and `ComputeShader.setTextureSampler()` to pass a texture sampler to the shader. See the next section for more explanations about texture samplers.
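On the shader side, each of these input types has its own WGSL declaration form. The sketch below is illustrative only (the names, struct and `rgba8unorm` format are not from the sample above); note the sampler sits at the texture's binding minus 1, matching the auto-binding convention described in the next section:

```wgsl
struct Params {
    color : vec4<f32>,
};

@group(0) @binding(0) var dest : texture_storage_2d<rgba8unorm, write>; // storage texture (write-only)
@group(0) @binding(1) var srcSampler : sampler;                         // sampler
@group(0) @binding(2) var src : texture_2d<f32>;                        // regular (sampled) texture
@group(0) @binding(3) var<uniform> params : Params;                     // uniform buffer
@group(0) @binding(4) var<storage, read_write> data : array<f32>;       // storage buffer
```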
Shader language and input bindings
The compute shader must be written in WGSL, which is the shader language used by WebGPU.
Just as GLSL shaders can be stored in `ShaderStore.ShadersStore`, WGSL shaders can be stored in `ShaderStore.ShadersStoreWGSL`, and you can pass the name of the key used to store the shader in this object to the `ComputeShader` constructor. You can also directly pass the shader code to the constructor (as done in the example above).
Browsers do not currently support reflection for WGSL shaders, meaning we are not able to automatically retrieve the binding and group values of the input variables, as seen here:
```wgsl
@group(0) @binding(0) var dest : texture_storage_2d<rgba8unorm, write>;
@group(0) @binding(1) var srcSampler : sampler;
@group(0) @binding(2) var src : texture_2d<f32>;
```
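For context, a complete texture-copy shader built on those bindings could look like the following. This body is a sketch, not the exact `copyTextureComputeShader` source used in the samples:

```wgsl
@group(0) @binding(0) var dest : texture_storage_2d<rgba8unorm, write>;
@group(0) @binding(1) var srcSampler : sampler;
@group(0) @binding(2) var src : texture_2d<f32>;

@compute @workgroup_size(8, 8, 1)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dims = textureDimensions(src, 0);
    // Guard against out-of-range invocations when the texture size
    // is not a multiple of the workgroup size.
    if (id.x >= dims.x || id.y >= dims.y) {
        return;
    }
    let pix = textureLoad(src, vec2<i32>(id.xy), 0);
    textureStore(dest, vec2<i32>(id.xy), pix);
}
```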
That's why you need to provide those bindings yourself when creating a new `ComputeShader` instance:
```javascript
const cs1 = new BABYLON.ComputeShader("myCompute", engine, { computeSource: copyTextureComputeShader }, {
    bindingsMapping: {
        "dest": { group: 0, binding: 0 },
        "src": { group: 0, binding: 2 }
    }
});
```
Note that for a (sampled) texture variable such as `src` in the example above, you can instruct the system to automatically bind the sampler corresponding to the texture. To do that, you must declare the sampler with a binding value equal to the binding value of the texture minus 1, and you should pass `true` as the 3rd parameter of the `setTexture()` call (or don't pass anything, as `true` is the default value). In this case, you must not add this sampler to the `bindingsMapping` object.
If you don't need or want to auto-bind the sampler corresponding to a texture, you can instruct the system not to bind it by passing `false` as the 3rd parameter to `ComputeShader.setTexture()`.
You can also bind your own sampler by creating one (see the `TextureSampler` class) and using the `ComputeShader.setTextureSampler()` method:
```javascript
const cs1 = new BABYLON.ComputeShader("myCompute", engine, { computeSource: copyTextureComputeShader }, {
    bindingsMapping: {
        "dest": { group: 0, binding: 0 },
        "srcSampler": { group: 0, binding: 1 },
        "src": { group: 0, binding: 2 }
    }
});
```
```javascript
const sampler = new BABYLON.TextureSampler().setParameters();
cs1.setTextureSampler("srcSampler", sampler);
```
In that case, you must add this sampler to the `bindingsMapping` object.
Examples
Simple compute shaders
Simple compute shaders
This Playground (PG) creates 3 compute shaders:
- The first one loads a texture and copies it into another texture by means of a compute shader. This texture is then applied to the ground. Warning: it is for demonstration purposes only; don't use this compute shader in real code, as a simple buffer copy would be enough to achieve the same thing!
- The second one clears a texture with a constant value and applies it to the sphere. Once again, don't use it in real code! Note that we don't use `addUniform` to create the layout of the uniform buffer: when there's a single property in a uniform buffer, calling `updateXXX()` to set the value of the property will also create the layout.
- The third one computes the multiplication of two matrices. It uses 3 storage buffers: 2 for the two input matrices and one for the result. The result buffer is then read and dumped to the console log. It is a direct port of Get started with GPU Compute on the Web.
Image Blur
Blur compute shader
This is a direct port of the WebGPU sample imageBlur.
Note that in the sample we call `dispatchWhenReady()` for the first compute shader, to be sure the compute effect is ready before going on, but for the subsequent shaders we simply call `dispatch()`: as they use the same shader code, we know the effect is already ready (it is the same one).
Compute Boids
Boids compute shader
This is a direct port of the WebGPU sample computeBoids.
It is a compute shader that updates two ping-pong buffers which store particle data. The data is used to draw instanced particles.
Note that it uses the `Mesh.forcedInstanceCount` property (new in 5.0) to set an instance count for a mesh that has no instances (`InstancedMesh`) but that we would like to render multiple times, because we provide the appropriate vertex buffers manually.
As the storage buffers we use to compute the particle positions and velocities will be used as (instanced) vertex buffers, we must flag them with `BUFFER_CREATIONFLAG_VERTEX` at creation time (see the `new BABYLON.StorageBuffer(...)` calls in the code).
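The ping-pong pattern itself reduces to simple index arithmetic: each frame the shader reads particle data from one buffer and writes to the other, and the roles swap next frame. A sketch, where `frame` is a hypothetical frame counter rather than a Babylon.js API:

```javascript
// Returns which of the two buffers to read from and write to this frame.
function pingPongIndices(frame) {
    return { read: frame % 2, write: (frame + 1) % 2 };
}

pingPongIndices(0); // { read: 0, write: 1 }
pingPongIndices(1); // { read: 1, write: 0 }

// In the Boids sample, the buffer written during one frame becomes both the
// compute shader's read buffer and the instanced vertex buffer on the next.
```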
Hydraulic erosion
Hydraulic erosion
This is a port of the great project Hydraulic-Erosion: all credits to sebastlague@gmail.com!
The generation of the terrain and the simulation of the erosion are done using two different compute shaders.
Note that this sample also works in WebGL2, where compute shaders are not available, but you should be careful when setting the parameters: don't raise the Iterations, Radius, Max lifetime, or Resolution values too high, or you may freeze your browser, since the terrain generation and erosion processes are then handled on the CPU side!
Slime simulation
Slime simulation
This is a port of the great project Slime-Simulation: all credits to sebastlague@gmail.com!
The implementation in WGSL is a little less pretty than the HLSL one because, at the time of this writing, WebGPU does not support read/write textures, so we had to use a storage buffer for the `TrailMap` texture. That means we need some buffer-to-texture and texture-to-buffer copy functions, and we have to do 4 reads from `TrailMap` instead of a single one when we need to get a `vec4` (see the code), which is likely less performant than its HLSL counterpart.
Ocean demo
Ocean demo
This is a port of the great project FFT-Ocean: all credits to Ivan Pensionerov (https://github.com/gasgiant)!
This sample uses a lot of compute shader runs: there are around 200-250 compute shaders running each frame! Use F8 to show/hide the GUI (after you click anywhere in the rendering area to give focus to the canvas) and WASD to move.