Chair of Mechatronics in Mechanical and Automotive Engineering (MEC)

Research Associate (m/f/d) in the area of "Hybrid model-based reinforcement learning"

About us

The chair of Prof. Bajcinca conducts research on modern methods and advanced applications of control and systems theory, built on three main pillars: cyber-physical systems, complex dynamical systems, and machine learning. Through a broad network of national and international research, academic, and industrial partners, the chair regularly acquires funded projects with challenging and highly interesting tasks in model-based and data-driven control. The research is supported by excellent laboratory equipment and high-performance computing in the areas of autonomous systems, robotics, and energy systems, which are continuously being extended.

https://www.mv.uni-kl.de/mec/home

 

Research Framework

Hybrid model-based reinforcement learning (RL) combines model-based and model-free RL. Hybrid model-based RL algorithms (HMRLAs) typically learn a model of the environment and use that model to plan the agent's actions, while also applying model-free techniques to learn from experience and adapt to changes in the environment. This combination offers advantages over both purely model-based and purely model-free RL. However, designing HMRLAs can be challenging, since it is often difficult to balance efficiency and robustness. In this regard, with the aim of increasing robustness, resiliency, and trust in RL algorithms, it is highly advantageous to incorporate a dynamical-systems formulation into the design of model-based RL. This is particularly important for applications governed by well-known physical, chemical, or biological laws.
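As an illustration of the hybrid idea described above, the classical Dyna-Q algorithm interleaves model-free Q-learning on real experience with planning steps on a learned environment model. The sketch below is generic background material, not part of the advertised project; the toy chain environment and all hyperparameters are illustrative assumptions.

```python
# Minimal Dyna-Q sketch: a tabular hybrid of model-free Q-learning
# (direct updates from real transitions) and model-based planning
# (simulated updates from a learned model of the environment).
# The chain environment and hyperparameters are illustrative only.
import random

def dyna_q(n_states=5, n_actions=2, episodes=50, planning_steps=10,
           alpha=0.5, gamma=0.95, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}  # (state, action) -> (reward, next_state), learned from data

    def step(s, a):
        # Toy chain: action 1 moves right, action 0 moves left;
        # reaching the last state yields reward 1 and ends the episode.
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = s_next == n_states - 1
        return (1.0 if done else 0.0), s_next, done

    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            a = (rng.randrange(n_actions) if rng.random() < epsilon
                 else max(range(n_actions), key=lambda x: Q[s][x]))
            r, s_next, done = step(s, a)
            # Model-free update from the real transition.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            # Model learning: remember the observed transition.
            model[(s, a)] = (r, s_next)
            # Model-based planning: replay simulated transitions.
            for _ in range(planning_steps):
                (ps, pa), (pr, ps_next) = rng.choice(list(model.items()))
                Q[ps][pa] += alpha * (pr + gamma * max(Q[ps_next]) - Q[ps][pa])
            s = s_next
    return Q
```

After training, the greedy policy moves right toward the rewarding terminal state; the planning loop is what lets the agent converge with far fewer real environment interactions than Q-learning alone.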

In this context, we are seeking motivated researchers to join our team in exploring the field of hybrid model-based reinforcement learning. This research represents a critical step towards developing trustworthy, resilient, and efficient RL algorithms with widespread impact across application domains such as robotics, autonomous driving, cancer biology, and epidemiology.

 

Task Description

The research comprises the following tasks:

  • Develop novel model-based RL methods and compare them with the state of the art.
  • Investigate mathematically how the developed model-based RL methods can be made less sensitive to errors in model specification, e.g. by considering more dynamic updates of the value function.
  • Derive mathematical guarantees for robustness and resiliency, with special emphasis on 'catastrophic forgetting' during online learning.
  • Develop connections to stochastic optimal control problems and the associated Hamilton-Jacobi-Bellman (HJB) equations.
  • Collaborate closely with academic researchers specialized in areas such as stochastic control and machine learning.
  • Apply the developed schemes to specific problems in biology, process engineering, and autonomous driving.
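For orientation, the stochastic optimal control connection mentioned in the tasks is the classical one: for a controlled diffusion with discounted reward, the value function satisfies a Hamilton-Jacobi-Bellman equation. The following formulation is standard background material, not a project-specific result.

```latex
% Controlled diffusion:      dX_t = f(X_t, u_t)\,dt + \sigma(X_t)\,dW_t
% Discounted value function: V(x) = \sup_u \mathbb{E}\Big[\int_0^\infty
%                                    e^{-\rho t}\, r(X_t, u_t)\,dt \,\Big|\, X_0 = x\Big]
% Stationary HJB equation:
\rho\, V(x) = \sup_{u}\Big\{\, r(x,u) + f(x,u)^{\top} \nabla V(x)
  + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma(x)\sigma(x)^{\top} \nabla^{2} V(x)\big) \Big\}
```

Model-free RL methods can be read as sample-based approximations of this equation, which is one route to the robustness guarantees sought above.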

 

Qualification

  • Above-average university degree in mathematics and control
  • Knowledge of at least one programming language (MATLAB, Python, or C++) is expected
  • Knowledge of dynamical systems and probability theory
  • Proficiency in English and/or German is essential
  • High motivation and the ability to work both in a team and independently

 

We offer

  • Remuneration according to TV-L E13, initially limited to one year
  • The opportunity to pursue a PhD and to teach, given scientific aptitude
  • TUK strongly encourages qualified female academics to apply
  • Severely disabled applicants will be given preference if suitably qualified (please enclose proof)
  • Electronic applications are preferred. Please attach a single coherent PDF.

You can expect interesting, diversified, and responsible work within a young, highly motivated, and interdisciplinary team at a growing chair, with considerable freedom for personal creativity.

Contact

Prof. Dr.-Ing. Naim Bajcinca
Phone: +49 (0)631/205-3230
Mobile: +49 (0)172/614-8209
Fax:  +49 (0)631/205-4201
Email: mec-apps(at)mv.uni-kl.de

 

Keywords

Reinforcement learning
Stochastic control
Random dynamical systems
Markov decision processes
Deep learning

 

Application Papers

Cover Letter
CV
University Certificates
References
List of Publications

 

Application Deadline

15 April 2024
Applications will be processed as soon as they are received.

 

Job Availability

Immediate
