You'll join our core 3D AI team: researchers and engineers working across the entire 3D stack, from raw data capture and geometric reconstruction to neural rendering and generative material synthesis. The team bridges academic rigor and production reality, moving fast without losing depth. Collaboration is close, hierarchy is flat, and the problems are genuinely hard.
What This Role Could Look Like
This is an open call for talented people working in 3D, whether your background is in computer vision, rendering, or machine learning. We're not hiring for a fixed spec; we're looking for people who are excellent in at least one of these areas and curious about the others.
Depending on your background and interests, you could be working on:
3D Computer Vision & Reconstruction
Building the geometric backbone of our scanning pipeline: multi-camera calibration, feature-based reconstruction, structure-from-motion, and high-precision mesh optimization. Integrating learned components where they outperform classical methods.
Inverse Rendering
Developing learning-based systems that recover geometry and physically-based materials from captured imagery: NeRF variants, Gaussian splatting, differentiable rendering, material decomposition, and relighting pipelines.
Neural Rendering & Material Systems
Designing BRDF models and material representations, building diffusion-based rendering pipelines, and developing intrinsic decomposition models that separate albedo, shading, and material properties from real-world captures.
Fields of Work
- Multi-camera calibration, SfM, bundle adjustment, mesh processing
- Inverse rendering: material and geometry estimation from images
- Differentiable rendering and appearance decomposition
- Generative 3D reconstruction: diffusion-based approaches, feed-forward networks
- BRDF modeling and physically-based material synthesis
- Integration of classical and learned components into production pipelines
Qualifications
- Master's or PhD in Computer Science, Computer Vision, Computer Graphics, Robotics, or a related field
- 3+ years of hands-on experience in at least one of the areas described above
- Solid understanding of at least one of the following: multi-view geometry, physically-based rendering, or generative models applied to 3D (diffusion models, latent representations)
- Ability to move between research and production, prototyping fast and shipping clean code
- Strong programming skills in Python and/or C++
Nice to Have
- Experience with differentiable rendering frameworks (Mitsuba 3, nvdiffrast, PyTorch3D)
- CUDA or GPU-accelerated geometry/rendering pipelines
- Familiarity with structured light systems, photogrammetry, or appearance capture
- Published research or open-source contributions in 3D vision, graphics, or ML
- Experience with USD/MaterialX, Unreal, or real-time rendering pipelines
Benefits
- Competitive salary and stock options
- Visa sponsorship and relocation support
- Flexible working hours and remote work policy
- Participation in SIGGRAPH, CVPR, and global AI conferences
- Lunch vouchers, coffee bar, and gym access (Durst AG campus)
- A research environment with real production impact and world-class clients