DA3 Multi-View Point Cloud

DA3_MultiViewPointCloud

Fuses multi-view depth maps into a single world-space point cloud. Predicted camera poses (extrinsics) transform each view's depth into a common world coordinate system, and the resulting points are combined.

Inputs:
- depths: batch of depth maps [N, H, W, 3] from the Multi-View 3D node
- images: original images [N, H, W, 3] providing RGB colors
- extrinsics: camera poses (JSON) from the Multi-View node
- intrinsics: camera intrinsics (JSON) from the Multi-View node
- confidence: optional confidence maps used to filter out low-confidence points
- sky_mask: optional sky segmentation mask to exclude sky pixels from the point cloud
- use_icp: refine alignment with ICP (slower but potentially more accurate)

Output: a single combined POINTCLOUD in world space.

Note: requires a Main series or Nested model (these predict camera poses). Mono/Metric models do not predict camera poses.
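The fusion step described above amounts to back-projecting each depth map through its intrinsics into camera space, then applying the camera-to-world extrinsic and concatenating the per-view points. A minimal NumPy sketch of that idea (function names, the single-channel depth input, and the camera-to-world pose convention are assumptions for illustration, not the node's actual code):

```python
import numpy as np

def depth_to_world_points(depth, K, cam_to_world):
    """Back-project one depth map to world-space 3D points.

    depth:        (H, W) depth values in camera units (assumption:
                  a single channel taken from the node's depth image)
    K:            (3, 3) pinhole intrinsics [[fx,0,cx],[0,fy,cy],[0,0,1]]
    cam_to_world: (4, 4) camera-to-world extrinsic (assumed convention)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Unproject pixels to camera space: x = (u-cx)*z/fx, y = (v-cy)*z/fy
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Rigidly transform into world space, then drop the homogeneous coordinate
    pts_world = pts_cam @ cam_to_world.T
    return pts_world[:, :3]

def fuse_views(depths, intrinsics, extrinsics):
    """Concatenate per-view world-space points into one cloud."""
    clouds = [depth_to_world_points(d, K, E)
              for d, K, E in zip(depths, intrinsics, extrinsics)]
    return np.concatenate(clouds, axis=0)
```

Because every view lands in the same world frame, fusion itself is just concatenation; any misalignment left over from imperfect pose predictions is what the optional ICP refinement is meant to reduce.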

Pack: ComfyUI-DepthAnythingV3

Module: custom_nodes.ComfyUI-DepthAnythingV3

Inputs (12)

Name                   Type      Required
depths                 IMAGE     required
images                 IMAGE     required
extrinsics             STRING    required
intrinsics             STRING    required
confidence             IMAGE     optional
sky_mask               MASK      optional
confidence_threshold   FLOAT     optional
downsample             INT       optional
use_icp                BOOLEAN   optional
allow_around_1         BOOLEAN   optional
filter_outliers        BOOLEAN   optional
outlier_percentage     FLOAT     optional
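The filter_outliers/outlier_percentage and downsample options suggest a percentage-based outlier filter and a point-count reduction. One plausible sketch of both, assuming a distance-to-centroid criterion and a uniform stride (the node's actual criteria may differ):

```python
import numpy as np

def filter_outliers_by_distance(points, outlier_percentage=2.0):
    """Drop the farthest outlier_percentage % of points.

    Hypothetical criterion: distance to the cloud centroid. Points
    beyond the (100 - outlier_percentage)th distance percentile are
    removed, which trims stray points far from the main structure.
    """
    centroid = points.mean(axis=0)
    dist = np.linalg.norm(points - centroid, axis=1)
    cutoff = np.percentile(dist, 100.0 - outlier_percentage)
    return points[dist <= cutoff]

def downsample_points(points, stride=4):
    """Keep every stride-th point (hypothetical uniform downsampling)."""
    return points[::stride]
```

In practice a centroid-distance filter is a blunt instrument compared to statistical neighbour-based filters, but it conveys why a small outlier_percentage (a few percent) is usually enough to remove floaters without eating into the surface.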

Outputs (1)

Name         Type
pointcloud   POINTCLOUD
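When use_icp is enabled, the per-view clouds are refined against each other with ICP before fusion. A single ICP iteration can be sketched as nearest-neighbour matching followed by a Kabsch rigid fit; this is a generic textbook sketch under those assumptions, not the node's implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration aligning src toward dst.

    1. Match each src point to its nearest dst point.
    2. Solve the best rigid transform (Kabsch/SVD) for those pairs.
    Returns the transformed src cloud plus the rotation and translation.
    """
    # Brute-force nearest neighbours (fine for small clouds; real code
    # would use a KD-tree)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Kabsch: centre both sets, SVD of the cross-covariance matrix
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t
```

Iterating this step until the residual stops shrinking is what makes ICP slower than pose-only fusion, and why it helps mainly when the predicted extrinsics are close but not exact: nearest-neighbour matching only finds the right correspondences once the clouds are already roughly aligned.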