From Multimodal Sensor Data to Automated Driving via Foundation Models (in English)
Processing multimodal sensor data is a central task in the automotive industry and, in particular, in automated vehicles. Classical approaches relied on various sensor fusion techniques that were difficult to generalize and extend to new sensor models or new application fields. The emergence of ‘Foundation Models’ has enabled the processing of multimodal sensor data through generative models that generalize easily to a wide range of detection, prediction, perception, and decision-making applications. This course describes the multimodal sensor data used in the automotive industry and its processing through foundation models for various perception tasks in automated driving. The course also explores the potential of foundation models by explaining their architectures, objective functions, and training strategies.
Requirements: —
Credit Points (ECTS): —
Dr.-Ing. Faezeh Fallah
Faezeh Fallah obtained her Bachelor of Science degree in electrical engineering, with a specialization in telecommunication engineering, in 2006. From 2006 to 2011 she worked as a designer of radio-frequency heads for commercial telecommunication systems based on DVB standards. Between 2011 and 2014 she completed her Master of Science degree in information technology at the University of Stuttgart, and from 2014 to 2017 she pursued her PhD (Dr.-Ing.) in the Faculty of Electrical Engineering and Computer Science of the University of Stuttgart, in the area of artificial intelligence and the processing of magnetic resonance images. Since 2017, she has been a research engineer developing artificial-intelligence-based algorithms for the processing and synthesis of sensor data in the automotive industry.