Hi @chrischoy
Really enjoyed going through the Synthia Temporal semantic segmentation pipeline.
I have a few questions about the data processing part of this.
I see that, for a given sequence, all the point clouds are loaded (stacked) and passed to the model in one go. I understand there is a temporal voxelization step, but what happens with really long sequences of dense point clouds?
For example, I am dealing with point cloud sequences totaling roughly 170M points spread over 300 frames. Can this pipeline and model handle that kind of data?
My GPU is a 24 GB Tesla M40.
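To make the question concrete, here is a rough sketch of what I mean by processing the sequence in small temporal windows instead of stacking all 300 frames at once. This is plain numpy with hypothetical names (`temporal_windows`, the window/stride sizes), not your actual data loader, just to illustrate the alternative I'm wondering about:

```python
import numpy as np

def temporal_windows(frames, window_size=3, stride=3):
    """Yield small groups of consecutive frames so each forward pass
    sees only a few frames instead of the whole sequence."""
    for start in range(0, len(frames), stride):
        chunk = frames[start:start + window_size]
        if len(chunk) == 0:
            break
        # Stack the frames in this window and tag each point with its
        # frame index as the temporal coordinate (mirroring, roughly,
        # what stacking a full sequence would do).
        coords = np.concatenate([
            np.hstack([pts, np.full((len(pts), 1), t, dtype=pts.dtype)])
            for t, pts in enumerate(chunk)
        ], axis=0)
        yield coords

# Example: 300 frames of random points standing in for the real sequence.
frames = [np.random.rand(1000, 3).astype(np.float32) for _ in range(300)]
for window_coords in temporal_windows(frames, window_size=3, stride=3):
    pass  # voxelize + run the model on this window rather than all 300 frames
```

Is something like this windowed processing the intended way to use the pipeline on long sequences, or is there a better approach?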
Thank you