
Learning to Optimally Segment Point Clouds

Peiyun Hu, David Held, Deva Ramanan

Carnegie Mellon University

Paper ID: 2977

Raw LiDAR Scans

Today, most autonomous vehicles perceive the world through LiDAR point clouds.

Map-based Preprocessing

They often use pre-built maps to first filter out points from the background, then run clustering on points from the foreground to obtain object-level perception.

*We focus on a limited field of view in this work.

Object-level Perception

However, it is often hard to set the right hyper-parameters for clustering. For example, Euclidean Clustering with a large distance threshold tends to under-segment pedestrians.

*Colors flicker because the algorithm does not track objects across time.

Object-level Perception

And Euclidean Clustering with a small distance threshold tends to over-segment vehicles.

*Colors flicker because the algorithm does not track objects across time.

No One-Fits-All Solution

ε = 2.0m, ε = 1.0m, ε = 0.5m, ε = 0.25m

*Colors across parameters do not indicate correspondence.

The best distance threshold often varies from scenario to scenario.
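As a concrete reference for the baseline discussed above, here is a minimal sketch of Euclidean clustering as connected components under a distance threshold ε, assuming the foreground points are given as an (N, 3) NumPy array; it illustrates the idea, not the paper's exact implementation.

```python
# Minimal sketch of Euclidean clustering as connected components under a
# distance threshold eps, assuming foreground points are an (N, 3) NumPy array.
# This illustrates the baseline under discussion, not the paper's exact code.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def euclidean_cluster(points: np.ndarray, eps: float) -> np.ndarray:
    """Label each point; two points share a label iff they are connected by a
    chain of neighbors closer than eps."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(r=eps, output_type="ndarray")  # (M, 2) index pairs
    n = len(points)
    adjacency = coo_matrix(
        (np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n, n)
    )
    _, labels = connected_components(adjacency, directed=False)
    return labels

# The same scene clustered at several thresholds:
# labels_by_eps = {eps: euclidean_cluster(foreground_xyz, eps)
#                  for eps in (0.25, 0.5, 1.0, 2.0)}
```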

A Hierarchical Perspective

ε = 2.0m, ε = 1.0m, ε = 0.5m, ε = 0.25m

Segmentations with different thresholds form a hierarchy, where nodes represent segments.
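One way to make the hierarchy concrete: because connectivity at a smaller ε is a sub-relation of connectivity at a larger ε, every fine cluster is contained in exactly one coarse cluster. The sketch below, reusing the illustrative euclidean_cluster helper above, links each fine segment to its containing coarse segment; the function name and data layout are assumptions for illustration.

```python
# Sketch of building the segment hierarchy. Because connectivity at a smaller
# eps is a sub-relation of connectivity at a larger eps, every fine cluster is
# contained in exactly one coarse cluster, so linking fine segments to their
# containing coarse segments yields a forest (one tree per coarsest segment).
from collections import defaultdict

def build_hierarchy(points, thresholds=(2.0, 1.0, 0.5, 0.25)):
    """Return one dict per level (coarse to fine) mapping a segment label to
    (point indices, parent label at the previous, coarser level)."""
    levels = []
    coarse_labels = None  # per-point segment label at the previous level
    for eps in thresholds:
        labels = euclidean_cluster(points, eps)
        segments = defaultdict(list)
        for idx, lab in enumerate(labels):
            segments[lab].append(idx)
        level = {}
        for lab, idxs in segments.items():
            parent = None if coarse_labels is None else coarse_labels[idxs[0]]
            level[lab] = (idxs, parent)
        levels.append(level)
        coarse_labels = labels
    return levels
```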

Learning Objectness Models

PointNet++ (Qi et al., NeurIPS’17)

We learn a model to predict an objectness score for each segment in the hierarchy.

*Illustration: a good segment receives a high objectness score (e.g., 0.9), while a bad segment receives a low score (e.g., 0.1).

Objectness: how well a segment overlaps with the ground truth.
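One natural way to turn this notion of objectness into a training target is the point-level IoU between a segment and its best-matching ground-truth object. The sketch below assumes segments and ground-truth objects are sets of point indices; the paper's exact target may differ.

```python
# One way to turn "overlap with ground truth" into a training target: the
# point-level IoU between a segment and its best-matching ground-truth object.
# The paper's exact target may differ; segments are sets of point indices here.
def objectness_target(segment, gt_objects):
    """segment: set of point indices; gt_objects: list of sets of point indices.
    Returns the best IoU in [0, 1]."""
    best = 0.0
    for gt in gt_objects:
        union = len(segment | gt)
        if union:
            best = max(best, len(segment & gt) / union)
    return best
```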

Searching for Optimality

Given a hierarchy of segments with scores, we search for the optimal segmentation.

Optimal Worst-case Segmentation

The worst segment score defines the segmentation score. We propose an efficient algorithm that produces the optimal segmentation under this definition.
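A sketch of the worst-case search idea: on the hierarchy, a segmentation's score is the score of its worst segment, and a bottom-up pass can, at each node, either keep the node as a single segment or take the union of its children's best segmentations, whichever yields the higher worst-case score. The Node class and its fields below are assumptions for illustration; see the paper for the precise algorithm and its optimality proof.

```python
# Sketch of the worst-case (max-min) search over the hierarchy: at each node,
# either keep the node as a single segment, or recurse into its children and
# take the union of their best segmentations, whichever has the higher
# worst-case score. The Node fields below are assumptions for this sketch.
from dataclasses import dataclass, field

@dataclass
class Node:
    score: float                 # predicted objectness of this segment
    children: list = field(default_factory=list)

def best_worst_case(node):
    """Return (segments, worst_score) for the subtree rooted at node."""
    if not node.children:
        return [node], node.score
    child_segs, child_worst = [], []
    for child in node.children:
        segs, worst = best_worst_case(child)
        child_segs.append(segs)
        child_worst.append(worst)
    split_worst = min(child_worst)
    if node.score >= split_worst:
        return [node], node.score   # keeping the coarse segment is no worse
    return [s for segs in child_segs for s in segs], split_worst
```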

Average-case Segmentation

The average local segment score defines the global segmentation score. We also propose an efficient algorithm guided by this average-case score.
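The plain average of segment scores does not decompose over the tree in the same way, since the number of segments in the final partition is unknown while recursing. One illustrative surrogate weights each segment's score by its point count, turning the objective into a sum that the same bottom-up pass can maximize exactly; this is a sketch of the idea, not necessarily the paper's exact average-case algorithm.

```python
# Illustrative surrogate for the average-case search: weight each segment's
# score by its point count (assumed stored as an extra `num_points` field on
# each node), turning the objective into a sum that the same bottom-up pass
# can maximize exactly. A sketch of the idea, not the paper's exact algorithm.
def best_weighted_average(node):
    """Return (segments, total point-weighted score) for the subtree at node."""
    keep_score = node.num_points * node.score
    if not node.children:
        return [node], keep_score
    segs, split_score = [], 0.0
    for child in node.children:
        child_segs, child_score = best_weighted_average(child)
        segs.extend(child_segs)
        split_score += child_score
    if keep_score >= split_score:
        return [node], keep_score
    return segs, split_score
```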

Quantitative Evaluation

Protocol: compute the percentage of objects that are under-segmented and over-segmented.

U = \frac{1}{L} \sum_{l=1}^{L} \mathbf{1}\!\left[ \frac{|C_{i^*} \cap C^{gt}_l|}{|C_{i^*}|} < \tau_U \right]

O = \frac{1}{L} \sum_{l=1}^{L} \mathbf{1}\!\left[ \frac{|C_{i^*} \cap C^{gt}_l|}{|C^{gt}_l|} < \tau_O \right]

where C^{gt}_l is the l-th of L ground-truth objects and C_{i^*} is the predicted segment that overlaps it most.

*Example: 2 under-segmented pedestrians, 1 over-segmented car.

Assumption: output is a valid partition.

Held et al., RSS’15
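A sketch of the protocol as written above, where C_{i^*} is taken to be the predicted segment with the largest overlap with each ground-truth object; the threshold defaults below are placeholders, not the paper's settings.

```python
# Sketch of the protocol above: for each ground-truth object C^gt_l, take the
# predicted segment C_{i*} with the largest overlap; count the object as
# under-segmented if the overlap covers too little of C_{i*}, and as
# over-segmented if it covers too little of C^gt_l. The threshold defaults are
# placeholders, not the paper's settings.
def segmentation_errors(pred_segments, gt_objects, tau_u=0.5, tau_o=0.5):
    """pred_segments, gt_objects: lists of sets of point indices.
    Returns (U, O): fractions of under- and over-segmented ground-truth objects."""
    under = over = 0
    for gt in gt_objects:
        best = max(pred_segments, key=lambda c: len(c & gt), default=set())
        inter = len(best & gt)
        if not best or inter / len(best) < tau_u:
            under += 1
        if inter / len(gt) < tau_o:
            over += 1
    L = len(gt_objects)
    return under / L, over / L
```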

Segmentation Errors

(1) As the distance threshold increases, under-segmentation increases and over-segmentation decreases. (2) Our adaptive algorithm significantly outperforms each single-parameter baseline. (3) We also plot the lower-bound errors for the search space, showing room for improvement.

[Bar chart: under-segmentation, over-segmentation, and total error rates (0 to 1) for CC(0.25m), CC(0.5m), CC(1m), CC(2m), Ours(min), Ours(avg), and CC(*).]


Algorithmic Output

Our algorithm can adaptively choose the best distance threshold for each scenario.

https://cs.cmu.edu/~peiyunh/opcseg
