This is required by webmachinelearning/webnn#226
To implement this pipeline, the semantic segmentation sample could execute the following steps (illustrative sketches of each stage follow the list):
- Use mediacapture-transform to capture the `VideoFrame`
- Import the `VideoFrame` into a WebGPU texture (`GPUExternalTexture`)
- Execute a WebGPU shader to do pre-processing and convert the `GPUExternalTexture` to a `GPUBuffer`
- Compute the `MLGraph` with the converted `GPUBuffer`; the output is another `GPUBuffer`
- Execute a WebGPU shader to do post-processing on the output `GPUBuffer` and render the result to a canvas
- Create a `VideoFrame` from the `CanvasImageSource` and enqueue it to the mediacapture-transform API's controller
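
A minimal sketch of the capture/enqueue shell (steps 1 and 6), assuming a video `MediaStreamTrack` named `track` and a hypothetical `segmentFrame()` helper that performs steps 2-5 and returns the rendered canvas:

```ts
// Steps 1 and 6: mediacapture-transform shell. `segmentFrame` is a
// hypothetical helper that runs steps 2-5 and returns the rendered canvas.
const processor = new MediaStreamTrackProcessor({ track });
const generator = new MediaStreamTrackGenerator({ kind: 'video' });

const transformer = new TransformStream<VideoFrame, VideoFrame>({
  async transform(videoFrame, controller) {
    const canvas = await segmentFrame(videoFrame); // steps 2-5
    // Step 6: wrap the rendered canvas in a new VideoFrame and enqueue it.
    controller.enqueue(new VideoFrame(canvas, { timestamp: videoFrame.timestamp }));
    videoFrame.close(); // release the camera frame promptly
  },
});

// The generator is itself a video MediaStreamTrack, e.g. for a <video> sink.
processor.readable.pipeThrough(transformer).pipeTo(generator.writable);
```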
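
For steps 2 and 3, a sketch assuming a `GPUDevice` named `device` and a pre-built compute pipeline `preprocessPipeline` whose WGSL entry point samples a `texture_external` binding and writes normalized float32 tensor data into a storage buffer (the binding layout here is an assumption):

```ts
// Steps 2-3: import the VideoFrame and run the pre-processing shader.
function preprocess(videoFrame: VideoFrame, inputBuffer: GPUBuffer) {
  // Step 2: zero-copy import of the frame as a GPUExternalTexture.
  // Note: an imported external texture is only valid within the current task.
  const externalTexture = device.importExternalTexture({ source: videoFrame });

  // Assumed layout: binding 0 = texture_external, binding 1 = storage buffer.
  const bindGroup = device.createBindGroup({
    layout: preprocessPipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: externalTexture },
      { binding: 1, resource: { buffer: inputBuffer } },
    ],
  });

  // Step 3: one thread per pixel; 8x8 workgroups are an arbitrary choice.
  const encoder = device.createCommandEncoder();
  const pass = encoder.beginComputePass();
  pass.setPipeline(preprocessPipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(
    Math.ceil(videoFrame.codedWidth / 8),
    Math.ceil(videoFrame.codedHeight / 8),
  );
  pass.end();
  device.queue.submit([encoder.finish()]);
}
```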
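
Step 4 is exactly what webmachinelearning/webnn#226 is about, so the API shape below is an assumption rather than settled spec: it presumes an `MLContext` created from the same `GPUDevice` and a compute call that accepts `GPUBuffer`s as input/output resources, with a previously built `MLGraph` named `graph`:

```ts
// Step 4 (assumed interop API, pending #226): create the ML context from the
// same GPUDevice so graph execution can share buffers with WebGPU.
const mlContext = await navigator.ml.createContext(device);

// Feed the pre-processed GPUBuffer in; receive segmentation scores in another
// GPUBuffer. 'input'/'output' stand in for the graph's actual operand names.
await mlContext.compute(graph, { input: inputBuffer }, { output: outputBuffer });
```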
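
And for step 5, a sketch assuming a render pipeline `postprocessPipeline` whose fragment shader reads class scores from the output `GPUBuffer` (bound via a `postprocessBindGroup`) and colors each pixel, drawing a full-screen triangle into a WebGPU-configured canvas:

```ts
// Step 5: post-process the output GPUBuffer and render the mask to a canvas.
const gpuCanvasContext = canvas.getContext('webgpu')!;
gpuCanvasContext.configure({
  device,
  format: navigator.gpu.getPreferredCanvasFormat(),
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginRenderPass({
  colorAttachments: [{
    view: gpuCanvasContext.getCurrentTexture().createView(),
    loadOp: 'clear',
    storeOp: 'store',
    clearValue: { r: 0, g: 0, b: 0, a: 1 },
  }],
});
pass.setPipeline(postprocessPipeline);
pass.setBindGroup(0, postprocessBindGroup); // binds the output GPUBuffer
pass.draw(3); // full-screen triangle; the fragment shader does the coloring
pass.end();
device.queue.submit([encoder.finish()]);
```

The canvas rendered here is what the `segmentFrame()` helper in the first sketch would hand back to step 6.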