TECHNOLOGY
Cutting-edge Computer Vision technology is at the heart of our company.

3D Reconstruction
Reconstruction example video [youtube]
Road Surface and Wetness Estimation
For the camera-based determination of the current road condition, we combine machine-learning-based classification with uncertainty estimation.
Never give up your steering wheel without knowing the road condition!
Press Release InFusion
Paper presented at the CVPR Workshop on Autonomous Driving, 06/2022, Poster (.pdf)
Check the RoadSaW Dataset
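As a minimal sketch of the idea, the snippet below turns raw classifier logits into a road-condition label plus a normalized predictive-entropy uncertainty score. The label set, logits, and thresholds are purely illustrative assumptions, not our production model:

```python
import numpy as np

ROAD_CLASSES = ["dry", "damp", "wet", "very_wet"]  # hypothetical label set

def classify_with_uncertainty(logits: np.ndarray):
    """Map raw logits to (label, uncertainty), where uncertainty is the
    predictive entropy normalized to [0, 1] (1.0 = maximally unsure)."""
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    uncertainty = entropy / np.log(len(probs))
    return ROAD_CLASSES[int(probs.argmax())], float(uncertainty)

label, u = classify_with_uncertainty(np.array([4.0, 1.0, 0.5, 0.2]))
```

A downstream planner can then refuse to hand back the steering wheel whenever the uncertainty exceeds a safety threshold.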
Object Motion Estimation
- panoptic video segmentation
- variational approaches for video and motion segmentation
- RANSAC and hypergraph based hypothesis generation
- machine learning based localization, tracking, and classification
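To illustrate RANSAC-based hypothesis generation, the sketch below estimates a dominant 2D translation from noisy point matches. The translation-only motion model is a deliberate simplification for clarity; real pipelines generate hypotheses over richer models such as homographies or fundamental matrices:

```python
import numpy as np

def ransac_translation(pts_a, pts_b, iters=200, thresh=1.0, seed=0):
    """Estimate a dominant 2D translation between matched point sets,
    robust to mismatches: sample one correspondence per hypothesis,
    score by inlier count, then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts_a))
        t = pts_b[i] - pts_a[i]                      # minimal-sample hypothesis
        err = np.linalg.norm(pts_b - (pts_a + t), axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    t = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t, best_inliers
```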

Camera Calibration
- Guided calibration: computation of camera parameters using known calibration patterns, for individual cameras and multi-camera setups.
- Registration: estimation of camera parameters based on known 3D geometry.
- Self-calibration: optimization of calibration during acquisition; automatically selected image features provide calibration landmarks.
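A minimal sketch of the registration idea: the classic Direct Linear Transform recovers a 3x4 projection matrix (up to scale) from known 3D points and their 2D images. The point data here is synthetic and illustrative; production calibration adds normalization, distortion models, and nonlinear refinement:

```python
import numpy as np

def dlt_projection_matrix(X, x):
    """Direct Linear Transform: recover the 3x4 projection matrix P
    (up to scale) from n >= 6 known 3D points X and their 2D images x."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        A.append([*p, 0, 0, 0, 0, *(-u * np.array(p))])
        A.append([0, 0, 0, 0, *p, *(-v * np.array(p))])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)            # null vector of A, reshaped

def project(P, X):
    """Project 3D points with P and dehomogenize to pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    xh = Xh @ P.T
    return xh[:, :2] / xh[:, 2:3]
```

Because `project` divides by the homogeneous coordinate, the unknown global scale (and sign) of the recovered matrix cancels out.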

Stereo

Point Cloud Processing
Point clouds and meshes produced by reconstruction techniques are used for 3D model generation. The resulting 3D models have measurable attributes, such as size, shape, and distance to other models, which enable applications such as obstacle detection in automated driving scenarios. Object clustering and model fitting identify objects from the raw 3D points.
Point cloud from SfM example [youtube]
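As a sketch of fitting on 3D points, the snippet below fits a ground plane to a point cloud by least squares and flags points rising above it as obstacle candidates. The plane model, margin, and single-pass fit are illustrative simplifications (a robust pipeline would use RANSAC for the plane as well):

```python
import numpy as np

def fit_ground_plane(points):
    """Fit the plane z = a*x + b*y + c to an (n, 3) point cloud
    by linear least squares; returns the coefficients (a, b, c)."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs

def obstacle_mask(points, coeffs, margin=0.2):
    """Flag points whose height above the fitted plane exceeds margin."""
    ground_z = np.c_[points[:, 0], points[:, 1], np.ones(len(points))] @ coeffs
    return points[:, 2] - ground_z > margin
```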

Semantic Scene Reconstruction
- Machine learning techniques: powerful real-time performance for dedicated tasks such as object detection, classification, and segmentation
- Deep learning: large-scale optimization for reference results
- Data set generation: tools for automatic augmentation and semi-automatic ground truth generation
Vehicle Lane Merge Visual Benchmark
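One building block of such augmentation tooling can be sketched as follows: applying the same random flip and crop to an image and its segmentation mask so the ground truth stays pixel-aligned. Shapes and the crop ratio are illustrative assumptions:

```python
import numpy as np

def augment(image, mask, rng):
    """Apply an identical random horizontal flip and 80% crop to an
    (H, W, C) image and its (H, W) label mask, keeping them aligned."""
    if rng.random() < 0.5:                       # horizontal flip
        image, mask = image[:, ::-1], mask[:, ::-1]
    h, w = mask.shape
    ch, cw = int(h * 0.8), int(w * 0.8)          # random 80% crop
    y = rng.integers(0, h - ch + 1)
    x = rng.integers(0, w - cw + 1)
    return image[y:y + ch, x:x + cw], mask[y:y + ch, x:x + cw]
```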

Resource Adaptive Scene Analysis and Reconstruction
Some of these techniques were developed as part of the RaSar research project.