Photogrammetric Data Classification for the Creation of Virtual Environments and Simulations

Meida Chen

Meida Chen, University of Southern California
Lucio Soibelman, University of Southern California
Ryan McAlinden, University of Southern California
Andrew J. Marx, University of Southern California
Steven D. Fleming, University of Southern California

25F

With the rapid advancement of unmanned aerial vehicle (UAV) technology, collecting the data needed to create 3D point clouds, meshes, and orthophotos of an outdoor scene with photogrammetric techniques has become feasible with few resources (people, equipment) and in a short period of time. Researchers at the Institute for Creative Technologies previously developed a UAV path-planning tool with which imagery covering a 1 km² area can be collected within two hours, and the 3D meshes can be reconstructed within a few hours. Such a rapid 3D modeling process for an area of interest has drawn the U.S. Army's attention and motivated the One World Terrain (OWT) Project. One of the objectives of the OWT project is to provide small units with the organic capability to create geo-specific virtual environments for training and rehearsal purposes to support military operations. For more information about the OWT project, readers can refer to http://www.dronemapping.org/. However, the generated data do not contain the semantic information needed to distinguish between objects. To allow both user- and system-level interaction with the meshes, and to enhance the visual acuity of the scene, classifying the generated point clouds, meshes, and associated orthophotos is a necessary step. The objective of this research is to design and develop a data classification and object-information extraction framework for data generated with photogrammetric techniques. Four sub-objectives are achieved with the proposed framework: 1) classifying orthophotos into different ground materials (i.e., dirt, road, and vegetation); 2) segmenting 3D point clouds/meshes into top-level objects (i.e., ground, buildings, and trees); 3) identifying individual tree locations; and 4) extracting building footprints.
To classify ground materials, the orthophotos are split into small images, which are then classified with a deep learning model. The output of this process is a vector map in which each point represents the position of a classified image tile and carries the corresponding ground-material label. The 3D point cloud is first segmented into ground and non-ground points using a progressive morphological filtering algorithm. Buildings and trees are then separated through a supervised machine learning process based on individual point characteristics; note that the ground-material information is also integrated as a point feature during this segmentation. Individual tree locations are identified by applying a k-means algorithm to the segmented tree points. Building footprints are extracted from the segmented building points in three designed steps: 1) roof extraction, 2) noise filtering, and 3) boundary extraction. Since the 3D meshes and point clouds share the same coordinate system, the meshes are segmented according to the point cloud segmentation results. The proposed framework has been validated on several datasets collected at different locations in the United States, and the results demonstrate its potential for practical use.
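The abstract describes these steps only at a high level; the sketches below illustrate how a few of them might be realized. First, a minimal sketch of the ground-material classification step, in which the orthophoto is split into small tiles, each tile is classified by a pre-trained model, and the results are stored as a vector map of (x, y, material) records. The tile size, georeferencing parameters, and the `model` callable are illustrative assumptions, not details from the paper; only the three material classes come from the abstract.

```python
import numpy as np

TILE = 64                                  # assumed tile size in pixels
CLASSES = ["dirt", "road", "vegetation"]   # ground materials named in the abstract

def classify_orthophoto(orthophoto, model, origin=(0.0, 0.0), gsd=0.1):
    """Split an orthophoto (H x W x 3 array) into tiles and classify each tile.

    `model` is any callable returning class probabilities for a batch of tiles;
    `origin` and `gsd` (ground sample distance, metres per pixel) georeference
    the resulting vector map.
    """
    h, w, _ = orthophoto.shape
    vector_map = []
    for row in range(0, h - TILE + 1, TILE):
        for col in range(0, w - TILE + 1, TILE):
            tile = orthophoto[row:row + TILE, col:col + TILE]
            probs = model(tile[np.newaxis, ...])       # shape (1, num_classes)
            material = CLASSES[int(np.argmax(probs))]
            # store the tile centre in world coordinates with its material label
            x = origin[0] + (col + TILE / 2) * gsd
            y = origin[1] - (row + TILE / 2) * gsd
            vector_map.append((x, y, material))
    return vector_map
```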
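Second, a sketch of the ground/non-ground split. The abstract names the progressive morphological filtering algorithm but not a specific implementation; the example below assumes the PDAL library, whose filters.pmf stage implements that filter and labels ground points with classification code 2.

```python
import json
import pdal

# Hypothetical input/output file names; the abstract does not specify a data format.
pipeline_def = {
    "pipeline": [
        "area_of_interest.las",
        {"type": "filters.pmf"},                    # progressive morphological filter
        {"type": "filters.range",
         "limits": "Classification[2:2]"},          # keep only points labelled ground
        "ground_points.las"
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
pipeline.execute()
ground = pipeline.arrays[0]                         # structured array of ground points
print(f"{len(ground)} ground points extracted")
```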
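Finally, a sketch of the tree-location step, assuming scikit-learn's k-means implementation. The abstract does not say how the number of trees is chosen, so a fixed cluster count is passed in purely for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def tree_locations(tree_points, n_trees):
    """Cluster the (x, y) coordinates of segmented tree points; each cluster
    centre is taken as an individual tree location."""
    xy = np.asarray(tree_points)[:, :2]             # cluster in plan view, ignore z
    km = KMeans(n_clusters=n_trees, n_init=10, random_state=0).fit(xy)
    return km.cluster_centers_                      # (n_trees, 2) array of locations

# Toy usage example: three synthetic tree crowns around known centres
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(c, 0.5, size=(200, 3))
                 for c in [(0.0, 0.0, 5.0), (10.0, 2.0, 6.0), (4.0, 9.0, 7.0)]])
print(tree_locations(pts, n_trees=3))
```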


January 30, 14:45–15:00 (15 min)

Granite ABC

