
3-D Shape Reconstruction

Artificial Intelligence · Coursework · Perception · Stanford CS
James Braza

I began my graduate AI coursework in Autumn 2021 with Stanford’s CS221: Artificial Intelligence: Principles and Techniques. This was my first AI class, and for the class project I chose to reproduce the findings of the Point Completion Network (PCN) paper from Carnegie Mellon University.

For my dataset, I used Stanford's Completion3D benchmark, as it was cited by most major shape reconstruction papers at the time.
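If memory serves, Completion3D ships its partial and complete point clouds as HDF5 files. Below is a minimal loading sketch using h5py; the file paths and the "data" key are assumptions about the layout, not verified details of the dataset, so adjust them to your copy.

```python
import h5py
import numpy as np

def load_cloud(path: str) -> np.ndarray:
    """Load one point cloud from a Completion3D-style .h5 file.

    Assumes the points are stored under a "data" key as an (N, 3) float
    array; change the key if your copy of the dataset differs.
    """
    with h5py.File(path, "r") as f:
        return np.asarray(f["data"], dtype=np.float32)

# Hypothetical paths pairing a partial scan with its ground-truth completion:
# partial = load_cloud("partial/02691156/example.h5")
# complete = load_cloud("gt/02691156/example.h5")
```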

Baselining Distance Metrics

My first experiment fed different percentages of the input partial point cloud (shown on the left below) into PCN and measured how far the reconstructed point cloud was from the ground truth point cloud.

[Figure: reconstruction given only 7.5% of the partial input]
Given only 7.5% of the input partial point cloud, we observe the reconstruction (middle) is poor. The distance metrics collected can serve as baseline measurements for low-quality reconstructions.
[Figure: reconstruction given the full partial input]
Given all 100% of the input partial point cloud, the reconstruction (middle) is much better, and this improvement is reflected in the lowered distance metrics.
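To produce these inputs at different densities, the partial cloud can be randomly subsampled before it is fed to the network. A minimal sketch of that step, assuming an (N, 3) NumPy array of XYZ points; the function name and signature are my own, not from the PCN codebase:

```python
import numpy as np

def subsample_partial(partial: np.ndarray, keep_fraction: float, seed: int = 0) -> np.ndarray:
    """Randomly keep `keep_fraction` of the points in an (N, 3) partial cloud."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, round(keep_fraction * len(partial)))
    keep_idx = rng.choice(len(partial), size=n_keep, replace=False)
    return partial[keep_idx]

# e.g. keep only 7.5% of the partial input before running PCN on it
# sparse_partial = subsample_partial(partial, keep_fraction=0.075)
```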

Other Findings

  • Reliable reconstruction required at least 10% of the original object to be present in the partial input
  • The point cloud distance metrics Chamfer Distance and Earth Mover’s Distance are not affected by the number of points, as long as the point distributions are similar (see the sketch after this list)
  • PCN has a fundamental limitation: minute details of the ground truth cloud do not show up in the reconstruction
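For reference, here is a plain NumPy sketch of the symmetric Chamfer Distance. It follows the standard definition (mean nearest-neighbor squared distance in both directions) rather than PCN's own implementation, and it builds the full pairwise distance matrix, so it only suits small clouds:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between (N, 3) and (M, 3) point clouds.

    Each cloud contributes the mean squared distance from its points to
    their nearest neighbors in the other cloud, so the value depends on the
    distribution of points rather than on how many there are.
    """
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

Because each term averages over nearest-neighbor distances, adding or removing points barely changes the value as long as the clouds cover the same shape, which is why the metric is insensitive to point count; Earth Mover's Distance instead computes an optimal matching between the two point sets.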

Source Code

jamesbraza/pcn

Code for CS221 Course Project working with PCN and ShapeNet data
