Cutting-edge Computer Vision technology is at the heart of our company.


3D Reconstruction

Our bread and butter is the retrieval of 3D information from images using simultaneous localization and mapping (SLAM) and structure-from-motion (SfM) techniques. Our algorithms reconstruct 3D scenes and camera motion from any image sequence captured with devices such as GoPros, smartphones, or the integrated cameras of modern vehicles.
Reconstruction example video [youtube]
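The core step of both SfM and SLAM, recovering a 3D point from its matched 2D observations in two views, can be sketched with linear (DLT) triangulation. This is a minimal NumPy illustration, not our production pipeline; the camera matrices and 3D point below are made up for the example:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: matched 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)          # null-space of A holds the solution
    X = Vt[-1]
    return X[:3] / X[3]                  # homogeneous -> Euclidean

# Two hypothetical calibrated cameras: identity pose, 1-unit baseline along x.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

In a full reconstruction, such triangulated points and the camera poses are then refined jointly by bundle adjustment.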


Depth-from-stereo has become ever more relevant over the last few years as stereo cameras are increasingly integrated into smartphones and modern vehicles. We develop algorithms that produce accurate depth maps from stereo image pairs using dense matching, optical flow, and machine learning approaches.
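The dense-matching idea behind depth-from-stereo can be sketched, assuming rectified images, with a naive sum-of-absolute-differences (SAD) block matcher that searches for each pixel's best match along the scanline. Production methods add regularization and sub-pixel refinement; this toy version only illustrates the matching principle:

```python
import numpy as np

def disparity_sad(left, right, max_disp=8, win=2):
    """Naive dense stereo: per-pixel SAD block matching along the scanline.
    Returns an integer disparity map (0 where no match was computed)."""
    h, w = left.shape
    disp = np.zeros((h, w))
    for y in range(win, h - win):
        for x in range(win + max_disp, w - win):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))   # lowest-cost shift wins
    return disp

# Synthetic pair: the right view is the left view shifted by 3 pixels.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
disp = disparity_sad(left, right, max_disp=5, win=2)
print(disp[10, 15])  # 3.0
```

Once the disparity d is known, depth follows from Z = f * b / d, with focal length f and stereo baseline b.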

Object Motion Estimation

While SfM and SLAM focus on static geometry, object segmentation techniques retrieve shape, type, and motion of moving objects. Among the approaches we use in our software are
  • variational approaches for video and motion segmentation
  • RANSAC- and hypergraph-based hypothesis generation
  • machine-learning-based localization, tracking, and classification
Vehicle localization demonstration video (5GCAR project)
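RANSAC-style hypothesis generation can be illustrated with the simplest possible motion model, a 2D translation: each hypothesis is drawn from a minimal sample of correspondences and scored by its inlier count. This is an illustrative sketch with synthetic data, not the hypergraph-based method itself:

```python
import numpy as np

def ransac_translation(src, dst, iters=100, thresh=0.5, rng=None):
    """RANSAC: hypothesize a 2D translation from one random correspondence
    (the minimal sample), score by inlier count, keep the best hypothesis."""
    rng = rng or np.random.default_rng(0)
    best_t, best_inliers = None, 0
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                                   # hypothesis
        inliers = np.sum(np.linalg.norm(dst - (src + t), axis=1) < thresh)
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# 50 correspondences following a common motion, 10 gross outliers.
rng = np.random.default_rng(0)
t_true = np.array([4.0, -2.0])
src = rng.uniform(0, 100, (60, 2))
dst = src + t_true
dst[:10] = rng.uniform(0, 100, (10, 2))
best_t, n_in = ransac_translation(src, dst, rng=rng)
print(np.allclose(best_t, t_true), n_in >= 50)  # True True
```

The same sample-score-select loop carries over to richer motion models (homographies, fundamental matrices), where the minimal sample contains several correspondences.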

Point Cloud Processing

Point clouds and meshes produced by our reconstruction techniques are used for 3D model generation. 3D models have measurable attributes, such as size, shape, and distance to other models; these attributes enable applications such as obstacle detection in automated driving scenarios. Object clustering and model fitting identify objects based on the 3D points and specific geometric models.
Point cloud from SfM example [youtube]
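A toy sketch of obstacle detection from a point cloud, assuming the ground points have already been identified: fit a plane by least squares (via SVD) and flag points lying well above it. A real pipeline would use robust fitting such as RANSAC on the full cloud; the scene below is synthetic:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: returns (centroid, unit normal).
    The normal is the direction of least variance in the point set."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    return c, Vt[-1]

# Hypothetical scene: a noisy ground plane plus a box-shaped obstacle on it.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-5, 5, 200), rng.uniform(-5, 5, 200),
                          rng.normal(0, 0.01, 200)])
box = np.column_stack([rng.uniform(1, 2, 50), rng.uniform(1, 2, 50),
                       rng.uniform(0.5, 1.0, 50)])
cloud = np.vstack([ground, box])

c, n = fit_plane(ground)
height = np.abs((cloud - c) @ n)      # point-to-plane distance
obstacle = cloud[height > 0.2]        # everything well above the ground
print(len(obstacle))  # 50
```

Clustering the flagged points (e.g., by Euclidean distance) then yields individual obstacle candidates with measurable size and position.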


Semantic Scene Reconstruction

Many applications benefit from knowing not only where objects are, but also what they are. Semantic analysis, i.e., classifying the content of an image or a 3D scene, is an important tool to gain a more complete understanding of the environment. At VISCODA we know the benefits and limits of both data-driven and model-based approaches, enabling us to fuse cutting-edge machine learning with traditional Computer Vision to find the optimal solution for the given problem.
  • Machine learning techniques: powerful real-time performance for dedicated tasks, such as object detection, classification, and segmentation
  • Deep learning: large-scale optimization for reference results
  • Data set generation: tools for automatic augmentation and semi-automatic ground truth generation
Using our tools we created a benchmark for vehicle localization and maneuver planning for automated driving:
Vehicle Lane Merge Visual Benchmark
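The automatic-augmentation idea can be illustrated with a minimal example: a horizontal flip that keeps the bounding-box annotations consistent, plus multiplicative brightness jitter. The function and data are hypothetical, for illustration only:

```python
import numpy as np

def augment(image, boxes, rng):
    """Simple augmentation pair: horizontal flip with box update, plus
    brightness jitter. boxes: rows of (x_min, y_min, x_max, y_max)."""
    h, w = image.shape[:2]
    out = image[:, ::-1].astype(np.float32)          # mirror the image
    boxes = boxes.copy().astype(np.float32)
    boxes[:, [0, 2]] = w - boxes[:, [2, 0]]          # mirror x coordinates
    out = np.clip(out * rng.uniform(0.8, 1.2), 0, 255)
    return out, boxes

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
boxes = np.array([[2, 3, 5, 7]])
rng = np.random.default_rng(0)
aug, new_boxes = augment(img, boxes, rng)
print(new_boxes[0])  # [5. 3. 8. 7.]
```

Keeping the labels transformed in lockstep with the pixels is what makes such augmentations usable for training without additional annotation effort.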

Camera Calibration

To reconstruct a highly accurate 3D scene from an image sequence it is paramount to know the specifics of the camera, such as the focal length or the type and amount of distortion introduced by the lens. We have considerable experience at quickly and accurately calibrating a wide range of different cameras, from almost-ideal pinhole to strongly distorted fisheye. Common tasks include
  • Guided calibration: computation of camera parameters using known calibration patterns, for individual cameras and multi-camera setups
  • Registration: estimation of camera parameters based on known 3D geometry
  • Self-calibration: optimization of the calibration during acquisition, using automatically selected image features as calibration landmarks
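As a toy instance of calibrating against known 3D geometry, the focal length of an ideal pinhole camera (known principal point, square pixels, no distortion, all simplifying assumptions for this sketch) can be recovered by one-dimensional least squares from 3D-to-2D correspondences; the scene below is synthetic:

```python
import numpy as np

def estimate_focal(points_3d, pixels, cx, cy):
    """Least-squares focal length from 3D points in the camera frame and
    their observed pixels: solves u - cx = f * X/Z and v - cy = f * Y/Z."""
    X, Y, Z = points_3d.T
    ratios = np.concatenate([X / Z, Y / Z])              # normalized coords
    offsets = np.concatenate([pixels[:, 0] - cx, pixels[:, 1] - cy])
    return offsets @ ratios / (ratios @ ratios)          # 1-D least squares

# Synthetic calibration target: 30 known 3D points projected by f = 700.
rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30),
                       rng.uniform(3, 6, 30)])
f_true, cx, cy = 700.0, 320.0, 240.0
pix = np.column_stack([f_true * pts[:, 0] / pts[:, 2] + cx,
                       f_true * pts[:, 1] / pts[:, 2] + cy])
print(round(estimate_focal(pts, pix, cx, cy), 1))  # 700.0
```

A full calibration additionally estimates the principal point, lens distortion coefficients, and, for multi-camera setups, the relative poses between the cameras.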

Surface Reflection Estimation

For the reconstruction of reflective surfaces, reflection models are incorporated into the scene reconstruction pipeline. Never give up your steering wheel without knowing the road condition!
Press Release InFusion
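To illustrate how a reflection model enters such a pipeline, here is a classic Lambertian-plus-Phong shading term (chosen for illustration; not necessarily the model used in InFusion). A shinier surface, e.g. a wet road, concentrates reflected light around the mirror direction, which is what makes reflections informative about surface condition:

```python
import numpy as np

def shade(normal, light_dir, view_dir, kd=0.6, ks=0.4, shininess=32):
    """Lambertian diffuse plus Phong specular term for one surface point."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    diffuse = max(n @ l, 0.0)
    r = 2 * (n @ l) * n - l                  # mirror reflection of the light
    specular = max(r @ v, 0.0) ** shininess  # sharp peak for shiny surfaces
    return kd * diffuse + ks * specular

n = np.array([0.0, 0.0, 1.0])                # road surface normal
light = np.array([1.0, 0.0, 1.0])            # low sun
glare = shade(n, light, np.array([-1.0, 0.0, 1.0]), shininess=200)  # mirror dir
side = shade(n, light, np.array([0.0, 0.0, 1.0]), shininess=200)
print(glare > side)  # True
```

Fitting such a model's parameters per surface region, instead of evaluating it with fixed ones, is what turns observed highlights into an estimate of the surface's reflectance.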

Resource Adaptive Scene Analysis and Reconstruction

For real-time applications, 3D scene reconstruction accuracy and object detection reliability depend on resource limitations such as hardware-specific computation power. Our objective is to achieve the optimal balance between accuracy and computation time by fusing classical algorithms with novel machine learning techniques.
Some of these techniques were developed as part of the RaSar research project.