Stieglbauer, Felix
Supervisor: Prof. Gudrun Klinker
Advisor: Linda Rudolph (@ge29tuw)
Submission Date: [created]


Scanning objects to create digital copies of them is gaining increasing attention in modern industry. But even though scanning itself is often automated, the volume to be scanned must still be defined manually. This poses various challenges in the environment of an industrial plant: a potentially hazardous setting with multifaceted structures requires a user, possibly without any experience in 3D scanning, to segment a volume accurately in a form the scanner can interpret. At least for this specific use case, common approaches to volume segmentation and selection fail to meet all necessary requirements for safety, efficiency, and functionality. We therefore developed our own algorithm, which segments the intended volume by intersecting the projections of user-drawn outlines on a small number of photos of the target object, using augmented reality. Our implementation successfully approximates target volumes of various sizes and delivers an appropriately detailed voxel structure. The pen-based input is easy and intuitive to use and requires the user to perform only simple, clear tasks, while our algorithm processes the data, computes the volume, and augments it into the scene. This provides visual feedback and reveals flaws in prior annotations, which can then be resolved on the spot. Because it was designed around our use case, the application fulfills all requirements derived from the scenario. It delivers promising results and could be a simple yet effective solution for this and similar problems.
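The intersection step described above is closely related to silhouette-based voxel carving: each voxel center is projected into every annotated photo, and only voxels that fall inside the drawn outline in all views are kept. The following is a minimal sketch of that idea; the function name, the 3x4 projection-matrix convention, and the boolean-mask representation of outlines are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def carve_voxels(voxel_centers, cameras, silhouettes):
    """Keep only voxels whose projection lies inside every drawn outline.

    voxel_centers: (N, 3) array of world-space voxel centers
    cameras:       list of 3x4 projection matrices (one per photo)
    silhouettes:   list of boolean (H, W) masks, True inside the outline
    Returns a boolean (N,) array: True for voxels that survive carving.
    """
    n = len(voxel_centers)
    keep = np.ones(n, dtype=bool)
    # Homogeneous coordinates for projection with 3x4 matrices.
    homog = np.hstack([voxel_centers, np.ones((n, 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                   # (N, 3) homogeneous image coords
        uv = proj[:, :2] / proj[:, 2:3]      # perspective divide -> pixels
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        h, w = mask.shape
        in_bounds = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        in_outline = np.zeros(n, dtype=bool)
        in_outline[in_bounds] = mask[v[in_bounds], u[in_bounds]]
        keep &= in_outline                   # intersect across all views
    return keep
```

Because each view can only remove voxels, the result is the intersection of all outline projections; a missing or sloppy outline shows up immediately as excess volume, which matches the abstract's point that flaws become visible and can be fixed on the spot.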

