• Co-Fusion: Real-time Segmentation, Tracking and Fusion of Multiple Objects

    In this paper we introduce a new dense SLAM system that takes a live stream of RGB-D images as input and segments the scene into different objects, using either motion or semantic cues, while simultaneously tracking and reconstructing their 3D shape in real time.

    Crucially, we use a multiple-model fitting approach where each object can move independently of the background and still be effectively tracked, with its shape fused over time using only the information from pixels associated with that object's label. Previous attempts to deal with dynamic scenes have typically treated moving regions as outliers of no interest to the robot, and consequently neither model their shape nor track their motion over time. In contrast, we enable the robot to maintain a 3D model for each segmented object and to improve it over time through fusion. As a result, our system enables a robot to maintain a scene description at the object level, which has the potential to allow interaction with its working environment, even in the case of dynamic scenes.
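    The core idea of fusing each object only from its own labelled pixels can be sketched as a masked, weighted running average over depth measurements. This is an illustrative simplification (a depth-map average rather than the surfel/TSDF fusion a full system would use), and all names here are hypothetical:

    ```python
    import numpy as np

    def fuse_depth(model_depth, model_weight, new_depth, labels, obj_id,
                   max_weight=64.0):
        """Fuse a new depth frame into one object's model, using only the
        pixels carrying that object's label. A minimal sketch of per-object
        weighted-average fusion; not the paper's actual implementation."""
        # Select pixels that belong to this object and have valid depth.
        mask = (labels == obj_id) & np.isfinite(new_depth)
        w = model_weight[mask]
        # Weighted running average: old estimate weighted by its confidence.
        model_depth[mask] = (model_depth[mask] * w + new_depth[mask]) / (w + 1.0)
        # Cap the weight so the model can still adapt to slow changes.
        model_weight[mask] = np.minimum(w + 1.0, max_weight)
        return model_depth, model_weight
    ```

    Pixels labelled as other objects (or background) are untouched, which is what lets each model be tracked and refined independently.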

  • Camera-Agnostic Monocular SLAM and Semi-dense 3D Reconstruction

    This paper discusses localisation and mapping techniques based on a single camera. After introducing the given problem, known as monocular SLAM, a new camera-agnostic monocular SLAM system (CAM-SLAM) is presented. It was developed within the scope of this work and is inspired by recently proposed SLAM methods. In contrast to most other systems, it supports any central camera model, including those of omnidirectional cameras. Experiments show that CAM-SLAM achieves accuracy similar to that of state-of-the-art methods while being considerably more flexible.

  • Master-Thesis: Camera-Agnostic Monocular SLAM and Semi-Dense 3D Reconstruction

    This thesis discusses localization and mapping techniques based on a single camera. After introducing the given problem, known as monocular SLAM, an overview of related publications is provided. Relevant mathematical principles are presented and subsequently used to compare the available methods at an abstract level. During this comparison, state-of-the-art methods are analysed thoroughly. Various camera models are studied with emphasis on omnidirectional cameras, and corresponding techniques are investigated. Employing omnidirectional cameras imposes special requirements that are not met by common SLAM methods. In this thesis, techniques that are applicable to traditional as well as omnidirectional cameras are evaluated. A new camera-agnostic monocular SLAM system (CAM-SLAM) is presented. It was developed within the scope of this thesis and is inspired by recently proposed SLAM methods. In contrast to most other systems, it supports any central camera model. Experiments show that CAM-SLAM achieves accuracy similar to that of state-of-the-art methods while being considerably more flexible.
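    The "any central camera model" idea can be illustrated as an interface: a central camera is any camera whose viewing rays pass through a single projection centre, so a camera-agnostic SLAM front end only ever needs two operations, project and unproject. The class names below are illustrative, not CAM-SLAM's actual API:

    ```python
    import numpy as np

    class CentralCamera:
        """Abstract central camera: pinhole, fisheye and catadioptric models
        can all implement this. The SLAM pipeline calls only these methods."""
        def project(self, p):       # 3D point in camera frame -> pixel
            raise NotImplementedError
        def unproject(self, px):    # pixel -> unit viewing ray in camera frame
            raise NotImplementedError

    class Pinhole(CentralCamera):
        """Standard pinhole model as one concrete instance of the interface."""
        def __init__(self, fx, fy, cx, cy):
            self.fx, self.fy, self.cx, self.cy = fx, fy, cx, cy

        def project(self, p):
            x, y, z = p
            return np.array([self.fx * x / z + self.cx,
                             self.fy * y / z + self.cy])

        def unproject(self, px):
            ray = np.array([(px[0] - self.cx) / self.fx,
                            (px[1] - self.cy) / self.fy,
                            1.0])
            return ray / np.linalg.norm(ray)
    ```

    An omnidirectional model (e.g. a unified sphere model) would plug in the same way, leaving tracking and mapping code unchanged, which is the flexibility the experiments refer to.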

  • Bachelor-Thesis: Real-Time Hair Simulation and Rendering

    My bachelor thesis presents real-time techniques for virtual hair generation, simulation and rendering, and discusses a prototype implemented within the scope of the thesis. After examining properties of human hair in section 2, section 3 outlines simulation methods and explains mass-spring systems. All simulation methods are based on particles, which are used to generate geometry (section 4) and to subsequently render hair strands. The Kajiya-Kay, Marschner, and an artist-friendly shading model are reviewed before shadow and self-shadowing techniques, such as deep opacity maps, are described in section 5. While the subjects of the first sections are platform-independent methods and properties, section 6 presents DirectX 11-oriented implementation details. Finally, the prototype is used to analyse the quality as well as the efficiency of the covered techniques in section 7.
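    The particle-based strand simulation from section 3 can be sketched in a few lines: a strand is a chain of particles advanced by Verlet integration under gravity, with distance constraints keeping segment lengths fixed. This is a generic mass-spring/constraint sketch under assumed parameters, not the thesis's DirectX 11 implementation:

    ```python
    import numpy as np

    def step_strand(pos, prev_pos, rest_len, dt=1.0 / 60, gravity=-9.81, iters=10):
        """Advance one hair strand (an (n, 3) particle chain) by one step.
        pos[0] is the root particle, pinned to the scalp."""
        # Verlet integration: velocity is implicit in (pos - prev_pos).
        new_pos = 2.0 * pos - prev_pos
        new_pos[:, 1] += gravity * dt * dt   # gravity acts on the y axis
        new_pos[0] = pos[0]                  # pin the root
        # Iteratively enforce the rest length of every segment.
        for _ in range(iters):
            for i in range(len(new_pos) - 1):
                d = new_pos[i + 1] - new_pos[i]
                n = np.linalg.norm(d)
                corr = (n - rest_len) / n * d
                if i == 0:
                    new_pos[i + 1] -= corr           # root must not move
                else:
                    new_pos[i] += 0.5 * corr
                    new_pos[i + 1] -= 0.5 * corr
            new_pos[0] = pos[0]
        return new_pos, pos  # caller keeps (current, previous) positions
    ```

    Calling this in a loop as `pos, prev = step_strand(pos, prev, rest_len)` makes an initially horizontal strand swing and settle under gravity while the segment lengths stay constant, which is the behaviour the mass-spring chapter describes.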

  • 3DCIS: A Real-time Browser-rendered 3D Campus Information System Based On WebGL

    Most current real-time 3D web applications are only available with plug-ins such as Flash or additional software such as Java. The new WebGL technology avoids this drawback by providing hardware-accelerated computer graphics in web browsers without requiring plug-ins. Using Blender, WebGL, the WebGL-extending framework GLGE, and B2G, an in-house exporter from Blender to GLGE, we have realized the cutting-edge web application 3DCIS, based on a complex 3D model of our campus. With 3DCIS one can explore the campus interactively and become acquainted with local people and institutions. Textual information about buildings, rooms and persons is linked with the 3D model to enhance the intuitive experience of 3DCIS.