Raw WebGPU
WebGPU is a new graphics API for the web that follows the architecture of modern graphics APIs such as Vulkan, DirectX 12, and Metal. This paradigm shift lets web apps enjoy the same benefits native graphics APIs bring: faster applications thanks to the ability to keep the GPU busy with work, fewer graphics-driver-specific bugs, and the potential for new features to be added in the future, whether through vendor extensions or the specification itself.
WebGPU is arguably the most complex of the web's rendering APIs, though that cost is offset by the increase in performance and the guarantee of future support the API provides. This post aims to demystify the API and make it easier to piece together how to write web apps that use it.
⚠️ Note: The WebGPU specification is relatively stable at this point. This blog post is based on the API as of August 21st, 2021; let me know either here or on Twitter if anything's changed and I'll update it right away.
I’ve prepared a GitHub repo with everything you need to get started. We’ll walk through writing a WebGPU Hello Triangle application in TypeScript.
Check out my other post on WebGL for writing graphics applications with an older but widely supported web graphics API.
Setup
First install:
- Any Chromium based browser’s Canary build (such as Google Chrome or Microsoft Edge), with the unsafe-webgpu flag enabled in about:flags.
- Git
- Node.js
- A text editor such as Visual Studio Code.
Then run the following in any terminal, such as VS Code’s Integrated Terminal.
# 🐑 Clone the repo
git clone https://github.com/alaingalvan/webgpu-seed

# 💿 Go inside the folder
cd webgpu-seed

# 🔨 Start building the project
npm start
Refer to this blog post on designing web libraries and apps for more details on Node.js, packages, etc.
Project Layout
As your project becomes more complex, you’ll want to separate files and organize your application into something more akin to a game or renderer. Check out this post on game engine architecture and this one on real-time renderer architecture for more details.
├─ 📂 node_modules/ # 👶 Dependencies
│ ├─ 📁 gl-matrix # ➕ Linear Algebra
│ └─ 📁 ... # 🕚 Other Dependencies (TypeScript, Webpack, etc.)
├─ 📂 src/ # 🌟 Source Files
│ ├─ 📄 renderer.ts # 🔺 Triangle Renderer
│ └─ 📄 main.ts # 🏁 Application Main
├─ 📄 .gitignore # 👁️ Ignore certain files in git repo
├─ 📄 package.json # 📦 Node Package File
├─ 📄 license.md # ⚖️ Your License (Unlicense)
└─ 📃 readme.md # 📖 Read Me!
Dependencies
- gl-matrix — A JavaScript library that lets you write glsl-like JavaScript code, with types for vectors, matrices, etc. While not in use in this sample, it's incredibly useful for more advanced topics such as camera matrices.
- TypeScript — JavaScript with types, which makes it significantly easier to program web apps thanks to instant autocomplete and type checking.
- Webpack — A JavaScript compilation tool for building minified outputs and testing our apps faster.
Overview
In this application we will need to do the following:
- Initialize the API — Check if navigator.gpu exists; if it does, request a GPUAdapter, then request a GPUDevice, and get that device's default GPUQueue.
- Setup Frame Backings — Create a GPUCanvasContext and configure it to receive a GPUTexture for the current frame, as well as any other attachments you might need (such as a depth-stencil texture). Create GPUTextureViews for those textures.
- Initialize Resources — Create your vertex and index GPUBuffers, load your WebGPU Shading Language (WGSL) shaders as GPUShaderModules, and create your GPURenderPipeline by describing every stage of the graphics pipeline. Finally, build your GPUCommandEncoder with the render passes you intend to run, then a GPURenderPassEncoder with all the draw calls you intend to execute for that render pass.
- Render — Submit your GPUCommandEncoder by calling .finish(), and submit that to your GPUQueue. Refresh the canvas context by calling requestAnimationFrame.
- Destroy — Destroy any data structures after you’re done using the API.
The following sections explain snippets that can be found in the GitHub repo, with certain parts omitted and member variables (this.memberVariable) declared inline without the this. prefix, so their types are easier to see and the examples here can work on their own.
Initialize API
Entry Point
To access the WebGPU API, you need to see if there exists a gpu object in the global navigator.
// 🏭 Entry to WebGPU
const entry: GPU = navigator.gpu;
if (!entry) {
    throw new Error('WebGPU is not supported on this browser.');
}
Adapter
An Adapter describes the physical properties of a given GPU, such as its name, extensions, and device limits.
// ✋ Declare adapter handle
let adapter: GPUAdapter = null;

// 🙏 Inside an async function...
// 🔌 Physical Device Adapter
adapter = await entry.requestAdapter();
Device
A Device is how you access the core of the WebGPU API, and will allow you to create the data structures you’ll need.
// ✋ Declare device handle
let device: GPUDevice = null;

// 🙏 Inside an async function...
// 💻 Logical Device
device = await adapter.requestDevice();
Queue
A Queue allows you to send work asynchronously to the GPU. As of the writing of this post, you can only access the default queue of a given GPUDevice.
// ✋ Declare queue handle
let queue: GPUQueue = null;
// 📦 Queue
queue = device.queue;
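Work you submit to a queue executes asynchronously. If you need to know when the GPU has actually finished, the queue exposes a promise for that; a minimal sketch, not part of the repo:

// 🙏 Inside an async function...
// ⏳ Wait for the GPU to finish all work submitted to the queue so far,
// useful for benchmarking or reading back results.
await queue.onSubmittedWorkDone();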
Frame Backings
Swapchain
In order to see what you’re drawing, you’ll need an HTMLCanvasElement and to set up a Swapchain from that canvas. A Swapchain manages a series of textures you'll use to present your final render output to your <canvas> element.
// ✋ Declare context handle
let context: GPUCanvasContext = null;

// ⚪ Create Context
context = canvas.getContext('webgpu');

// ⛓️ Configure Swapchain
const canvasConfig: GPUCanvasConfiguration = {
    device: device,
    format: 'bgra8unorm',
    usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC
};
context.configure(canvasConfig);
Frame Buffer Attachments
When executing different passes of your rendering system, you’ll need output textures to write to, be it depth textures for depth testing or shadows, or attachments for various aspects of a deferred renderer such as view space normals, PBR reflectivity/roughness, etc.
Frame buffer attachments are references to texture views, which you’ll see later when we write our rendering logic.
// ✋ Declare attachment handles
let depthTexture: GPUTexture = null;
let depthTextureView: GPUTextureView = null;

// 🤔 Create Depth Backing
const depthTextureDesc: GPUTextureDescriptor = {
    size: [canvas.width, canvas.height, 1],
    dimension: '2d',
    format: 'depth24plus-stencil8',
    usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC
};

depthTexture = device.createTexture(depthTextureDesc);
depthTextureView = depthTexture.createView();

// ✋ Declare canvas context image handles
let colorTexture: GPUTexture = null;
let colorTextureView: GPUTextureView = null;

colorTexture = context.getCurrentTexture();
colorTextureView = colorTexture.createView();
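Note that these attachments are sized at creation. If your canvas can resize, the depth texture needs to be recreated to match; here's a minimal sketch (the resize function is hypothetical, not part of the repo):

// 📐 Hypothetical resize handler: size-dependent attachments must be rebuilt.
const resize = (width: number, height: number) => {
    canvas.width = width;
    canvas.height = height;

    // 🗑️ Release the old depth backing and create one at the new size
    depthTexture.destroy();
    depthTexture = device.createTexture({ ...depthTextureDesc, size: [width, height, 1] });
    depthTextureView = depthTexture.createView();
};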
Initialize Resources
Buffers
A Buffer is an array of data, such as a mesh’s positional data, color data, index data, etc. When rendering triangles with a raster-based graphics pipeline, you’ll need 1 or more buffers of vertex data (commonly referred to as Vertex Buffer Objects or VBOs), and 1 buffer of the indices that correspond with each triangle vertex that you intend to draw (otherwise known as an Index Buffer Object or IBO).
// 📈 Position Vertex Buffer Data
const positions = new Float32Array([
    1.0, -1.0, 0.0,
   -1.0, -1.0, 0.0,
    0.0,  1.0, 0.0
]);

// 🎨 Color Vertex Buffer Data
const colors = new Float32Array([
    1.0, 0.0, 0.0, // 🔴
    0.0, 1.0, 0.0, // 🟢
    0.0, 0.0, 1.0  // 🔵
]);

// 📇 Index Buffer Data
const indices = new Uint16Array([0, 1, 2]);

// ✋ Declare buffer handles
let positionBuffer: GPUBuffer = null;
let colorBuffer: GPUBuffer = null;
let indexBuffer: GPUBuffer = null;

// 👋 Helper function for creating GPUBuffer(s) out of Typed Arrays
let createBuffer = (arr: Float32Array | Uint16Array, usage: number) => {
    // 📏 Align to 4 bytes (thanks @chrimsonite)
    let desc = {
        size: (arr.byteLength + 3) & ~3,
        usage,
        mappedAtCreation: true
    };
    let buffer = device.createBuffer(desc);

    const writeArray = arr instanceof Uint16Array
        ? new Uint16Array(buffer.getMappedRange())
        : new Float32Array(buffer.getMappedRange());
    writeArray.set(arr);
    buffer.unmap();
    return buffer;
};

positionBuffer = createBuffer(positions, GPUBufferUsage.VERTEX);
colorBuffer = createBuffer(colors, GPUBufferUsage.VERTEX);
indexBuffer = createBuffer(indices, GPUBufferUsage.INDEX);
Shaders
With WebGPU comes a new shader language: the WebGPU Shading Language (WGSL).

Translating from other shading languages to WGSL is straightforward. The language is similar to other shading languages like Metal Shading Language (MSL) and HLSL, with C++ style decorators such as [[location(0)]] and Rust style struct definitions and functions.
Here’s the vertex shader source:
struct VSOut {
    [[builtin(position)]] Position: vec4<f32>;
    [[location(0)]] color: vec3<f32>;
};

[[stage(vertex)]]
fn main([[location(0)]] inPos: vec3<f32>,
        [[location(1)]] inColor: vec3<f32>) -> VSOut {
    var vsOut: VSOut;
    vsOut.Position = vec4<f32>(inPos, 1.0);
    vsOut.color = inColor;
    return vsOut;
}
Here’s the fragment shader source:
[[stage(fragment)]]
fn main([[location(0)]] inColor: vec3<f32>) -> [[location(0)]] vec4<f32> {
    return vec4<f32>(inColor, 1.0);
}
Shader Modules
Shader Modules are compiled from your plain text WGSL source and execute on the GPU when you run a pipeline that uses them.
// 📄 Import or declare in line your WGSL code:
import vertShaderCode from './shaders/triangle.vert.wgsl';
import fragShaderCode from './shaders/triangle.frag.wgsl';
// ✋ Declare shader module handles
let vertModule: GPUShaderModule = null;
let fragModule: GPUShaderModule = null;
const vsmDesc = { code: vertShaderCode };
vertModule = device.createShaderModule(vsmDesc);
const fsmDesc = { code: fragShaderCode };
fragModule = device.createShaderModule(fsmDesc);
Uniform Buffer
You’ll often need to feed data directly to your shader modules, and to do this you’ll need to specify a uniform. In order to create a Uniform Buffer in your shader, declare the following prior to your main function:
[[block]] struct UBO {
    modelViewProj: mat4x4<f32>;
};

[[binding(0), group(0)]] var<uniform> uniforms: UBO;

// ❗ Then in your Vertex Shader's main function,
// replace the line that sets vsOut.Position with:
vsOut.Position = uniforms.modelViewProj * vec4<f32>(inPos, 1.0);
Then in your JavaScript code, create a Uniform Buffer as you would with an index/vertex buffer.
You’ll want to use a library like gl-matrix in order to better manage linear algebra calculations such as matrix multiplication.
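For instance, here's a minimal sketch of composing a ModelViewProjection matrix with gl-matrix; the camera position and field of view are arbitrary placeholder values:

import { mat4, vec3 } from 'gl-matrix';

// 🎥 Projection: 45° vertical field of view, near/far planes at 0.1/100
const projection = mat4.create();
mat4.perspective(projection, Math.PI / 4, canvas.width / canvas.height, 0.1, 100.0);

// 👁️ View: camera at (0, 0, 3) looking at the origin
const view = mat4.create();
mat4.lookAt(view, vec3.fromValues(0, 0, 3), vec3.fromValues(0, 0, 0), vec3.fromValues(0, 1, 0));

// 🧊 Model: identity, since our triangle doesn't move
const model = mat4.create();

// ♟️ modelViewProj = projection * view * model
const modelViewProj = mat4.create();
mat4.multiply(modelViewProj, view, model);
mat4.multiply(modelViewProj, projection, modelViewProj);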
// 👔 Uniform Data
const uniformData = new Float32Array([
    // ♟️ ModelViewProjection Matrix (identity)
    1.0, 0.0, 0.0, 0.0,
    0.0, 1.0, 0.0, 0.0,
    0.0, 0.0, 1.0, 0.0,
    0.0, 0.0, 0.0, 1.0,

    // 🔴 Primary Color
    0.9, 0.1, 0.3, 1.0,

    // 🟣 Accent Color
    0.8, 0.2, 0.8, 1.0
]);

// ✋ Declare buffer handle
let uniformBuffer: GPUBuffer = null;

uniformBuffer = createBuffer(uniformData, GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
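If you later need to update the buffer (say the camera moved), you can schedule a copy through the queue rather than re-mapping it; a sketch assuming the handles above:

// 🔄 Upload fresh uniform data each frame.
// Requires the buffer to have been created with GPUBufferUsage.COPY_DST.
queue.writeBuffer(uniformBuffer, 0, uniformData.buffer, uniformData.byteOffset, uniformData.byteLength);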
Pipeline Layout
Once you have a uniform, you can create a Pipeline Layout to describe where that uniform will be when executing a Graphics Pipeline.
You have 2 options here: you can either let WebGPU derive the layout from your WGSL shaders and query it from a pipeline that’s already been created, or describe the layout yourself in advance:
// 👨🔧 Create your graphics pipeline...
// Then get your pipeline layout based on your WGSL shaders:
let bindGroupLayout: GPUBindGroupLayout = pipeline.getBindGroupLayout(0);
Or if you know the layout in advance, you can describe it yourself and use it during pipeline creation:
// ✋ Declare handles
let uniformBindGroupLayout: GPUBindGroupLayout = null;
let uniformBindGroup: GPUBindGroup = null;
let layout: GPUPipelineLayout = null;

// 📁 Bind Group Layout
uniformBindGroupLayout = device.createBindGroupLayout({
    entries: [{
        binding: 0,
        visibility: GPUShaderStage.VERTEX,
        buffer: { type: 'uniform' }
    }]
});

// 🗄️ Bind Group
// ✍ This would be used when encoding commands
uniformBindGroup = device.createBindGroup({
    layout: uniformBindGroupLayout,
    entries: [{
        binding: 0,
        resource: {
            buffer: uniformBuffer
        }
    }]
});

// 🗂️ Pipeline Layout
// 👩‍🔧 This would be used as a member of a GPURenderPipelineDescriptor
layout = device.createPipelineLayout({ bindGroupLayouts: [uniformBindGroupLayout] });
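For reference, the bind group is what you'd attach while encoding your render pass later on; a sketch of where that call fits:

// 🔗 Bind group 0 corresponds to [[group(0)]] in the WGSL above.
passEncoder.setBindGroup(0, uniformBindGroup);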
Graphics Pipeline
A Graphics Pipeline describes all the data that’s to be fed into the execution of a raster-based graphics pipeline. This includes:
- 🔣 Input Assembly — What does each vertex look like? Which attributes are where, and how do they align in memory?
- 🖍️ Shader Modules — What shader modules will you be using when executing this graphics pipeline?
- ✏️ Depth/Stencil State — Should you perform depth testing? If so, what function should you use to test depth?
- 🍥 Blend State — How should colors be blended between the previously written color and current one?
- 🔺 Rasterization — How does the rasterizer behave when executing this graphics pipeline? Does it cull faces? Which winding direction determines the front face?
- 💾 Uniform Data — What kind of uniform data should your shaders expect? In WebGPU this is done by describing a Pipeline Layout.
// ✋ Declare pipeline handle
let pipeline: GPURenderPipeline = null;

// ⚗️ Graphics Pipeline

// 🔣 Input Assembly
const positionAttribDesc: GPUVertexAttribute = {
    shaderLocation: 0, // [[location(0)]]
    offset: 0,
    format: 'float32x3'
};
const colorAttribDesc: GPUVertexAttribute = {
    shaderLocation: 1, // [[location(1)]]
    offset: 0,
    format: 'float32x3'
};
const positionBufferDesc: GPUVertexBufferLayout = {
    attributes: [positionAttribDesc],
    arrayStride: 4 * 3, // sizeof(float) * 3
    stepMode: 'vertex'
};
const colorBufferDesc: GPUVertexBufferLayout = {
    attributes: [colorAttribDesc],
    arrayStride: 4 * 3, // sizeof(float) * 3
    stepMode: 'vertex'
};

// 🌑 Depth
const depthStencil: GPUDepthStencilState = {
    depthWriteEnabled: true,
    depthCompare: 'less',
    format: 'depth24plus-stencil8'
};

// 🦄 Uniform Data
const pipelineLayoutDesc = { bindGroupLayouts: [] };
const layout = device.createPipelineLayout(pipelineLayoutDesc);

// 🎭 Shader Stages
const vertex: GPUVertexState = {
    module: vertModule,
    entryPoint: 'main',
    buffers: [positionBufferDesc, colorBufferDesc]
};

// 🌀 Color/Blend State
const colorState: GPUColorTargetState = {
    format: 'bgra8unorm'
};
const fragment: GPUFragmentState = {
    module: fragModule,
    entryPoint: 'main',
    targets: [colorState]
};

// 🟨 Rasterization
const primitive: GPUPrimitiveState = {
    frontFace: 'cw',
    cullMode: 'none',
    topology: 'triangle-list'
};

const pipelineDesc: GPURenderPipelineDescriptor = {
    layout,
    vertex,
    fragment,
    primitive,
    depthStencil
};

pipeline = device.createRenderPipeline(pipelineDesc);
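Pipeline creation compiles your shaders for the target GPU, which can take a moment. The API also offers an asynchronous variant that resolves once compilation finishes; a sketch, assuming you're inside an async function:

// 🧵 Same descriptor, but doesn't block while shaders compile.
pipeline = await device.createRenderPipelineAsync(pipelineDesc);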
Command Encoder
Command Encoders encode all the draw commands you intend to execute in groups of Render Pass Encoders. Once you’ve finished encoding commands, you’ll receive a Command Buffer that you can submit to your queue.
In that sense a command buffer is analogous to a callback that executes draw functions on the GPU once it’s submitted to the queue.
// ✋ Declare command handles
let commandEncoder: GPUCommandEncoder = null;
let passEncoder: GPURenderPassEncoder = null;

// ✍️ Write commands to send to the GPU
function encodeCommands() {
    let colorAttachment: GPURenderPassColorAttachment = {
        view: colorTextureView,
        loadValue: { r: 0, g: 0, b: 0, a: 1 },
        storeOp: 'store'
    };

    const depthAttachment: GPURenderPassDepthStencilAttachment = {
        view: depthTextureView,
        depthLoadValue: 1,
        depthStoreOp: 'store',
        stencilLoadValue: 'load',
        stencilStoreOp: 'store'
    };

    const renderPassDesc: GPURenderPassDescriptor = {
        colorAttachments: [colorAttachment],
        depthStencilAttachment: depthAttachment
    };

    commandEncoder = device.createCommandEncoder();

    // 🖌️ Encode drawing commands
    passEncoder = commandEncoder.beginRenderPass(renderPassDesc);
    passEncoder.setPipeline(pipeline);
    passEncoder.setViewport(0, 0, canvas.width, canvas.height, 0, 1);
    passEncoder.setScissorRect(0, 0, canvas.width, canvas.height);
    passEncoder.setVertexBuffer(0, positionBuffer);
    passEncoder.setVertexBuffer(1, colorBuffer);
    passEncoder.setIndexBuffer(indexBuffer, 'uint16');
    passEncoder.drawIndexed(3);
    passEncoder.endPass();

    queue.submit([commandEncoder.finish()]);
}
Render
Rendering in WebGPU is a simple matter of updating any uniforms you intend to update, getting the next attachments from your context, submitting your command encoders to be executed, and using the requestAnimationFrame callback to do all of that again.
const render = () => {
    // ⏭ Acquire next image from context
    colorTexture = context.getCurrentTexture();
    colorTextureView = colorTexture.createView();

    // 📦 Write and submit commands to queue
    encodeCommands();

    // ➿ Refresh canvas
    requestAnimationFrame(render);
};
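One detail that's easy to miss: nothing calls render the first time. Kick the loop off once your async initialization has finished:

// 🏁 Start the render loop after setup completes
requestAnimationFrame(render);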
Conclusion
WebGPU might be more difficult than other graphics APIs, but it’s an API that more closely aligns with the design of modern graphics cards, and as a result it should yield not only faster applications, but applications that last longer.
There were a few things I didn’t cover in this post, as they would have been beyond its scope, such as:
- Matrices, be it for cameras or for transforming objects in the scene. gl-matrix is an invaluable resource there.
- A detailed overview of every possible state of a graphics pipeline. The type definitions are very helpful there.
- Blend Modes, which are easier to understand visually; Anders Riggelsen wrote a tool to visualize blend mode behavior with OpenGL here.
- Compute pipelines; review the specification or some of the examples below if you want to try those.
- Loading textures; this can be a bit involved, and while the examples below introduce it very well, a minimal upload sketch follows this list.
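Here's that minimal texture-upload sketch, assuming you're inside an async function; the URL and format are placeholder values, and sampling/binding are left to the examples below:

// 🖼️ Fetch and decode an image ('texture.png' is a placeholder path)
const response = await fetch('texture.png');
const imageBitmap = await createImageBitmap(await response.blob());

// 🗺️ Create a texture sized to the image
const texture = device.createTexture({
    size: [imageBitmap.width, imageBitmap.height, 1],
    format: 'rgba8unorm',
    usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST | GPUTextureUsage.RENDER_ATTACHMENT
});

// 📤 Copy the image into the texture through the queue
queue.copyExternalImageToTexture(
    { source: imageBitmap },
    { texture: texture },
    [imageBitmap.width, imageBitmap.height]
);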
Additional Resources
Here are a few articles/projects for WebGPU, in no particular order:
- Dzmitry Malyshau wrote an article similar to this one introducing WebGPU in Mozilla Firefox.
- William Usher (@_wusher)’s article: From 0 to glTF with WebGPU.
- Warren Moore (@warrenm) wrote an article to help folks transition from the Metal API to WebGPU.
There’s also a number of open source projects including:
- Austin Eng’s WebGPU Samples
- Tarek Sherif (@tsherif)’s WebGPU Examples
- BabylonJS’s WebGPU Branch
- WebGPU’s Type Definitions
- WebGPU’s Conformance Tests
- Dawn — A C++ implementation of WebGPU that powers Chromium. Carl Woffenden released a Hello Triangle example with WebGPU and Dawn.
The specifications for WebGPU and the WebGPU Shading Language are also worth taking a look at:
You can find all the source for this post in the GitHub Repo here.