When I wrote about iOS 26 and WebGPU, I zoomed in on the platform-level shift Apple made — and why it matters for video-heavy apps. But what I didn’t talk about was how we actually build BrandLens under the hood today. Because making a video editor run seamlessly in a mobile browser — with no app download — takes a lot more than clever UI. It’s equal parts WebGL, FFmpeg, hardware encoding, and, yes, a fair amount of shader code.
So let’s get into the details of how it works.
Rendering in the Browser with WebGL + GLSL
Inside the BrandLens editor, everything you see — video previews, filters, overlays, even timeline scrubbing — is rendered with WebGL. On top of that, we use Three.js as our rendering engine for building the UI and scene graph.
The heavy lifting comes from GLSL shaders, which let us process frames directly on the GPU:
- Color Conversion – Raw camera feeds usually arrive in YUV, while WebGL prefers RGBA. Our fragment shaders handle this conversion in real time.
- Alpha Blending – For transparent overlays like logos or stickers, shaders compute per-pixel blending. This makes brand assets feel native to the video, not layered on afterward.
- Filter Application – Brightness, contrast, LUT-based grading — all done via fragment shaders so users can see the exact result while recording.
This shader-based approach means creators can shoot, edit, and preview complex video effects without waiting for server processing. And it keeps the browser smooth even on mid-range devices.
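To make the three shader jobs above concrete, here is the per-pixel math they evaluate, written in plain JavaScript for illustration. The real versions run in GLSL on the GPU, and the exact YUV matrix depends on the camera's color space; this sketch assumes full-range BT.601 coefficients.

```javascript
// Per-pixel math behind our fragment shaders, shown in plain JS for
// illustration (the real versions run in GLSL on the GPU).

// YUV (0-255) -> RGB (0-255), assuming BT.601 full-range coefficients.
function yuvToRgb(y, u, v) {
  const r = y + 1.402 * (v - 128);
  const g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128);
  const b = y + 1.772 * (u - 128);
  const clamp = (x) => Math.min(255, Math.max(0, Math.round(x)));
  return [clamp(r), clamp(g), clamp(b)];
}

// Standard "source over" alpha blend for overlays (channel values 0-1).
function blendOver(src, dst, alpha) {
  return src * alpha + dst * (1 - alpha);
}

// A brightness filter is just a per-channel offset with clamping.
function brightness(channel, offset) {
  return Math.min(255, Math.max(0, channel + offset));
}
```

The GPU runs this math for every pixel of every frame in parallel, which is why the preview stays real-time even while filters and overlays are stacked.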
From Browser to Server: FFmpeg + NVENC
Once a user finishes recording, we upload the raw stream, along with its metadata, to our servers. That’s where FFmpeg takes over:
- Layering and Effects – We replicate the same filters and overlays applied in the live preview, ensuring WYSIWYG (what you see is what you get).
- Concatenation – Multiple clips and audio tracks are stitched into a single MP4.
- Encoding Acceleration – For speed-sensitive jobs, we lean on NVIDIA NVENC, offloading encoding to GPU hardware. This cuts processing time dramatically.
- CPU Processing for Assets – For less time-sensitive uploads, FFmpeg’s CPU path handles scaling, format conversion, and normalization.
This balance of GPU-accelerated encoding (NVENC) and CPU-based processing lets us optimize for both speed and scalability.
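A hedged sketch of how a server job might pick between those two paths: assemble the FFmpeg arguments, preferring the NVENC encoder when a GPU is free and falling back to the CPU x264 path otherwise. The flags shown are standard FFmpeg options; our actual pipeline adds the filter and overlay stages on top.

```javascript
// Sketch: assemble an FFmpeg command line, preferring the h264_nvenc
// GPU encoder when available and falling back to the CPU libx264 path.
// All flags are standard FFmpeg options; the real pipeline also applies
// filters and overlays before encoding.
function buildEncodeArgs(input, output, { useNvenc }) {
  const codec = useNvenc
    ? ["-c:v", "h264_nvenc"] // GPU-accelerated H.264 encode
    : ["-c:v", "libx264", "-preset", "medium"]; // CPU fallback
  return ["-y", "-i", input, ...codec, "-c:a", "aac", output];
}
```

In practice the scheduler routes speed-sensitive exports to GPU workers and batch asset jobs to CPU workers, so neither queue starves the other.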
Alpha Video: Compatibility Meets Performance
One of the trickier problems we solved is handling alpha video (transparent layers). Instead of requiring green screens or specialized players, we:
- Encode alpha as a grayscale plane in the bottom half of each H.264 frame. Standard H.264 has no native alpha channel, so packing transparency into the frame itself gives maximum compatibility.
- Process it with FFmpeg to ensure cross-device playback.
- Re-render in WebGL so the video can be composited live with other assets in the editor.
This workflow means users can create layered, branded content that looks professional but plays reliably everywhere.
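One standard way to unpack that bottom-half alpha layout server-side is a crop plus alphamerge filtergraph. The sketch below builds that graph as a string; crop and alphamerge are real FFmpeg filters, but this is an illustration of the technique rather than our exact command.

```javascript
// Sketch: FFmpeg filtergraph that splits a frame whose bottom half
// carries the alpha channel, then merges it back as real transparency.
// crop and alphamerge are standard FFmpeg filters; the layout (color on
// top, grayscale alpha on the bottom) matches the packing described above.
function alphaUnpackFilter() {
  return [
    "[0:v]crop=iw:ih/2:0:0[color]", // top half: color
    "[0:v]crop=iw:ih/2:0:ih/2[alpha]", // bottom half: alpha as grayscale
    "[color][alpha]alphamerge[out]", // recombine into transparent frames
  ].join(";");
}
```

In the browser, a small fragment shader does the mirror-image step: it samples the top half for color and the bottom half for alpha, so the same file plays as a transparent layer in the editor.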
Why WebGPU Changes the Game
Right now, GLSL + WebGL gives us enough horsepower to run advanced video editing in the browser. But WebGPU — rolling out across Chrome, Safari, and Firefox — takes it further:
- Compute Shaders – Unlike WebGL, WebGPU allows true compute shaders, letting us move jobs like filter stacking, frame analysis, and even partial encoding to the client device.
- Lower Overhead – Direct access to modern GPU APIs (Metal, Vulkan, Direct3D 12) reduces CPU bottlenecks. That means smoother playback and less battery drain.
- WGSL – WebGPU’s new shading language is designed for safety and portability, which reduces edge-case rendering bugs we’ve had to patch in GLSL across devices.
In practice, WebGPU will let us shift more processing client-side, reducing server load while giving users faster results. It’s not abstract “future tech” — it’s a roadmap to making video co-creation more scalable.
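For a flavor of what that unlocks, here is a minimal WGSL compute shader of the kind WebGPU enables: a brightness pass over a pixel buffer, with no render pipeline involved. This is an illustrative sketch, not shader code from our editor; the buffer layout, binding indices, and workgroup size are all assumptions.

```wgsl
// Illustrative WGSL compute shader: add a brightness offset to every
// pixel in a storage buffer. Names and layout are hypothetical.
struct Params { offset : f32 }

@group(0) @binding(0) var<storage, read_write> pixels : array<vec4<f32>>;
@group(0) @binding(1) var<uniform> params : Params;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id : vec3<u32>) {
  let i = id.x;
  if (i >= arrayLength(&pixels)) { return; }
  let p = pixels[i];
  let rgb = clamp(p.rgb + vec3<f32>(params.offset),
                  vec3<f32>(0.0), vec3<f32>(1.0));
  pixels[i] = vec4<f32>(rgb, p.a);
}
```

In WebGL we would have to fake this with a fullscreen quad and a fragment shader; a compute pass like this can read and write buffers directly, which is what makes client-side jobs like frame analysis practical.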
Other Companies Exploring Similar Paths
We’re not alone in this journey. A few notable examples:
- Shadertoy (link) – While not a video editor, it shows the raw creative potential of GLSL shaders in the browser, inspiring how we handle live filters.
- CL3VER (link) – Uses WebGL to deliver interactive 3D and media editing in the browser, similar to how we deliver video editing without an app.
- ROOT-EVE (research) – A scientific visualization engine being re-architected on WebGPU, showing how compute shaders can offload high-precision rendering to the GPU.
These examples confirm what we’ve seen: GPUs in the browser are no longer experimental — they’re the backbone of modern interactive media platforms.
Why It Matters for BrandLens Users
The end result of all this engineering is simple: a seamless experience for creators and organizations.
Campaign managers don’t need to know that we’re writing GLSL shaders or piping alpha planes through FFmpeg. What they care about is that their audience can scan a QR code, record a video, and have it look polished — without friction.
That’s why we’re excited about WebGPU. Because every advancement at the GPU layer translates into one thing: lowering the barrier for people to tell authentic stories on video.
👉 Want to see this tech in action? Check out our evergreen UGC campaign guide for how BrandLens combines engineering and strategy to scale video creation.