with S. Zorzi, E. Maset and F. Crosilla
This work introduces an innovative algorithm for LiDAR point-cloud classification that relies on Convolutional Neural Networks (CNNs) and takes advantage of the full-waveform data recorded by modern laser scanners. The proposed method consists of two steps. First, a simple CNN pre-processes each waveform, providing a compact representation of the data. Exploiting the coordinates of the points associated with the waveforms, the output vectors generated by the first CNN are then mapped into an image, which is subsequently segmented by a Fully Convolutional Network (FCN): a label is assigned to each pixel and, consequently, to the point falling in that pixel. In this way, the spatial positions and geometrical relationships between neighbouring data are taken into account.
First, the waveform classifier (a standard CNN) predicts the point class exploiting only the full-waveform data. Predictions are then mapped into an image, together with height information derived from the 3D coordinates of the points. The resulting multi-channel image is then processed by an FCN (U-net) that refines the predictions using spatial information.
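A minimal sketch of the two-step pipeline is given below, assuming PyTorch. Layer sizes, the number of classes (`NUM_CLASSES`), the grid resolution and the helper names (`WaveformCNN`, `map_to_image`) are illustrative assumptions, not the actual configuration used in the experiments; the U-net refinement stage is not shown and can be any standard implementation.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6        # hypothetical number of point classes
WAVEFORM_LEN = 160     # waveform samples per point (zero-padded)

class WaveformCNN(nn.Module):
    """Step 1: per-waveform classifier producing class scores for each point."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (WAVEFORM_LEN // 4), NUM_CLASSES)

    def forward(self, w):                  # w: (batch, 1, WAVEFORM_LEN)
        x = self.features(w).flatten(1)
        return self.classifier(x)          # per-point class scores

def map_to_image(scores, xy, z, grid_size=256):
    """Step 2a: rasterise per-point class scores and height into a
    multi-channel image. xy are point coordinates normalised to [0, 1),
    z is the height channel."""
    img = torch.zeros(NUM_CLASSES + 1, grid_size, grid_size)
    cols = (xy[:, 0] * grid_size).long().clamp(0, grid_size - 1)
    rows = (xy[:, 1] * grid_size).long().clamp(0, grid_size - 1)
    img[:NUM_CLASSES, rows, cols] = scores.T   # class-score channels
    img[NUM_CLASSES, rows, cols] = z           # height channel
    return img                                 # fed to the U-net for refinement
```

The resulting `(NUM_CLASSES + 1, grid_size, grid_size)` tensor would then be passed to the U-net, whose per-pixel output labels are propagated back to the points falling in each pixel.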
The dataset used in the experiments is available here by courtesy of Helica srl. It consists of txt files in which each row corresponds to a measured point and contains: the 3D coordinates of the point, the label of the point, and the waveform registered by the instrument, composed of 160 values (zero-padded).
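A hedged sketch of how one row might be parsed is shown below; it assumes a whitespace-separated layout of x, y, z, label and then the 160 waveform samples, which should be checked against the actual files.

```python
import numpy as np

def parse_row(line):
    """Split one dataset row into coordinates, label and waveform
    (column order assumed: x y z label w1 ... w160)."""
    values = line.split()
    xyz = np.array(values[0:3], dtype=np.float64)              # 3D coordinates
    label = int(float(values[3]))                              # point class label
    waveform = np.array(values[4:4 + 160], dtype=np.float32)   # zero-padded waveform
    return xyz, label, waveform

def load_file(path):
    """Load every non-empty row of a txt file."""
    with open(path) as f:
        return [parse_row(line) for line in f if line.strip()]
```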
⚠ Warning: the dataset is 3.69 GB.