DA3 Multi-View Point Cloud
DA3_MultiViewPointCloud
Fuse multi-view depth maps into a single world-space point cloud. Uses predicted camera poses (extrinsics) to transform each view's depth into a common world coordinate system, then combines all points.

Inputs:
- depths: Batch of depth maps [N, H, W, 3] from the Multi-View 3D node
- images: Original images [N, H, W, 3] for RGB colors
- extrinsics: Camera poses JSON from the Multi-View node
- intrinsics: Camera intrinsics JSON from the Multi-View node
- confidence: Optional confidence maps to filter low-confidence points
- sky_mask: Optional sky segmentation to exclude sky pixels from the point cloud
- use_icp: Refine alignment with ICP (slower but potentially more accurate)

Output: Single combined POINTCLOUD in world space.

Note: Requires a Main series or Nested model (with camera pose prediction). Mono/Metric models don't predict camera poses.
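The core fusion step — back-projecting each view's depth through the inverse intrinsics and transforming by its camera-to-world extrinsic — can be sketched as follows. This is a minimal NumPy illustration, not the node's actual implementation: it assumes single-channel depth maps, a 3x3 intrinsic matrix `K` per view, and 4x4 camera-to-world matrices (the real node parses these from the JSON strings and additionally handles colors, confidence, and masks). The function names are hypothetical.

```python
import numpy as np

def backproject_to_world(depth, K, cam_to_world):
    """Lift a depth map [H, W] into world-space points [H*W, 3].

    K: 3x3 camera intrinsics.
    cam_to_world: 4x4 extrinsic mapping camera coords to world coords.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates [u, v, 1] for every pixel
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)
    # Camera-space points: X_cam = depth * K^-1 @ [u, v, 1]^T
    cam = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)
    # Rigid transform into the shared world frame
    cam_h = np.concatenate([cam, np.ones((cam.shape[0], 1))], axis=1)
    return (cam_to_world @ cam_h.T).T[:, :3]

def fuse_views(depths, Ks, cam_to_worlds):
    """Concatenate per-view world-space points into one combined cloud."""
    return np.concatenate(
        [backproject_to_world(d, K, T) for d, K, T in zip(depths, Ks, cam_to_worlds)],
        axis=0,
    )
```

Because every view is expressed in the same world frame before concatenation, overlapping regions align as well as the predicted poses allow; the optional ICP step then refines that alignment.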
Pack: ComfyUI-DepthAnythingV3
custom_nodes.ComfyUI-DepthAnythingV3
Inputs (12)
| Name | Type | Required |
|---|---|---|
| depths | IMAGE | required |
| images | IMAGE | required |
| extrinsics | STRING | required |
| intrinsics | STRING | required |
| confidence | IMAGE | optional |
| sky_mask | MASK | optional |
| confidence_threshold | FLOAT | optional |
| downsample | INT | optional |
| use_icp | BOOLEAN | optional |
| allow_around_1 | BOOLEAN | optional |
| filter_outliers | BOOLEAN | optional |
| outlier_percentage | FLOAT | optional |
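To illustrate how the optional `confidence_threshold` and `downsample` inputs interact, here is a hedged sketch of confidence-based filtering followed by stride downsampling. The helper name and the exact filtering order are assumptions for illustration; the node's internal behavior (including `filter_outliers` / `outlier_percentage`) may differ.

```python
import numpy as np

def filter_points(points, confidence, threshold=0.5, downsample=1):
    """Hypothetical helper: keep points whose confidence meets `threshold`,
    then keep every `downsample`-th surviving point.

    points: [M, 3] world-space points; confidence: [M] scores in [0, 1].
    """
    keep = confidence.reshape(-1) >= threshold
    filtered = points.reshape(-1, 3)[keep]
    return filtered[::downsample]
```

A `downsample` of 1 keeps every confident point; larger values thin the cloud uniformly, which is useful for keeping large multi-view fusions viewable.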
Outputs (1)
| Name | Type |
|---|---|
| pointcloud | POINTCLOUD |