
Fields of Research


Intelligent Control – Reinforcement Learning

The application of Artificial Intelligence (AI) and Reinforcement Learning (RL) techniques has attracted tremendous interest in recent years. The idea of learning and optimizing from experience can be used effectively in control strategies and verified in simulation. However, deploying these techniques directly in real-life applications is challenging due to the presence of uncertainties and disturbances. Our aim is to develop concepts from AI and RL into effective control strategies and to test them on actual systems in our lab. The inverted pendulum (IP) is one of the most popular benchmark systems for testing new control algorithms. An RL-based control strategy coupled with a PID controller is used to swing up and balance the linear IP in our lab: the RL agent is trained in simulation before being applied to the real experimental system. Current work involves online training of the agent using different algorithms.
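The hybrid strategy described above can be illustrated with a minimal simulation sketch. The trained RL swing-up policy is replaced here by a simple energy-pumping rule as a stand-in, and all parameters, gains, and switching thresholds are illustrative assumptions rather than values from our lab setup:

```python
import numpy as np

# Simplified torque-driven pendulum: theta = 0 hanging down, theta = pi upright.
# The trained RL swing-up policy is replaced by an energy-pumping rule,
# purely as a stand-in; all gains and thresholds are illustrative assumptions.
M, L, G = 1.0, 1.0, 9.81                 # mass [kg], length [m], gravity [m/s^2]
E_UP = M * G * L                          # pendulum energy at the upright equilibrium
KP, KD, KI = 40.0, 10.0, 1.0              # PID gains for the balancing phase
K_SWING, U_SWING, U_BAL = 1.0, 5.0, 20.0  # swing-up gain and torque limits

def wrap(angle):
    """Map an angle to (-pi, pi]."""
    return (angle + np.pi) % (2 * np.pi) - np.pi

def simulate(t_end=30.0, dt=0.005):
    theta, omega, integ = 0.1, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        phi = wrap(theta - np.pi)              # deviation from upright
        if abs(phi) < 0.5 and abs(omega) < 2.0:
            # balancing phase: PID on the upright error
            integ += phi * dt
            u = np.clip(-KP * phi - KD * omega - KI * integ, -U_BAL, U_BAL)
        else:
            # swing-up phase (stand-in for the RL policy): pump energy
            integ = 0.0                        # avoid integral windup
            energy = 0.5 * M * L**2 * omega**2 - M * G * L * np.cos(theta)
            u = np.clip(K_SWING * (E_UP - energy) * np.sign(omega),
                        -U_SWING, U_SWING)
        # semi-implicit Euler step of theta'' = -(g/l) sin(theta) + u/(m l^2)
        omega += (-(G / L) * np.sin(theta) + u / (M * L**2)) * dt
        theta += omega * dt
    return wrap(theta - np.pi), omega

phi_end, omega_end = simulate()
print(f"final upright error: {phi_end:.3f} rad, velocity: {omega_end:.3f} rad/s")
```

The torque limit in the swing-up phase is kept below the gravity torque, so the controller must pump energy over several swings before the PID loop catches and balances the pendulum near the upright position.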


Description of the figures
Left: Experimental set-up with the inverted pendulum
Schematic of the control strategy where the Agent is trained in simulation and then deployed in the real system along with a PID controller to stabilize the balance.



Active Vibration Control

Active Vibration Control (AVC), a relatively mature field of research, deals with continuous mechanical and civil structures subject to unwanted environmental excitations. An intriguing topic is to relate the modelling tools available for these structures, such as reduced-order finite element models or, more importantly, system identification methods, to active and semi-active control systems. Among other topics, our research aims at investigating the nonlinear behavior of structures for uncertainty quantification. In this way, the linear models obtained from time-domain and frequency-domain system identification methods can be improved. As a result, less conservative model-based control approaches can be developed that address AVC more efficiently.
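As a minimal sketch of the time-domain system identification step mentioned above, the snippet below fits a second-order ARX model to input/output data by least squares. The "measured" data come from a simulated discrete-time oscillator with illustrative coefficients, not from a real test structure:

```python
import numpy as np

# Least-squares fit of a second-order ARX model
#   y[k] = a1*y[k-1] + a2*y[k-2] + b*u[k-1]
# to input/output data from a (here simulated) vibrating structure.
# The "true" coefficients below are illustrative, not from a real test rig.
rng = np.random.default_rng(0)
a1, a2, b = 1.6, -0.8, 0.5         # stable discrete-time oscillator
u = rng.standard_normal(500)        # persistently exciting input signal
y = np.zeros(500)
for k in range(2, 500):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b * u[k-1]

# Stack regressors [y[k-1], y[k-2], u[k-1]] and solve min ||y - Phi p||^2
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
p, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
print("estimated [a1, a2, b]:", np.round(p, 3))
```

In the noise-free case the estimate recovers the true coefficients exactly; with measurement noise, such a linear model is only an approximation, which is precisely where quantifying the unmodelled nonlinear behavior becomes relevant.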



Robust Adaptive Control Techniques – Sliding Mode Control

Sliding mode control (SMC) is widely used for uncertain linear and nonlinear systems due to its robustness to uncertainties as well as to external disturbances. However, SMC has several drawbacks: the chattering problem, slow response to fast-varying faults, the need for an upper bound of the unknown function (disturbance, uncertainty, or fault), and slow convergence. To overcome these drawbacks, we propose novel SMC schemes that improve the sliding surface or the reaching law, or that hybridize SMC with other controllers such as backstepping, neural-network, or PID controllers, in order to improve performance and alleviate chattering in the control input.
Robotic manipulators play a crucial role in several industrial sectors and have served as an interesting benchmark in the development and evaluation of new nonlinear controllers. The main purpose of the robot controller is to reduce the trajectory-tracking error of the manipulator. However, complex dynamics, nonlinearity, uncertainty, and disturbances dramatically degrade the tracking performance of robots. Therefore, in implementing our novel sliding mode controllers, a robot manipulator is used as a benchmark for evaluating and validating controller effectiveness.
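A minimal sketch of the idea, reduced to a single robot joint rather than a full manipulator: the sliding mode controller below tracks a sinusoidal joint reference despite an unknown bounded disturbance, and a smooth tanh boundary layer replaces the discontinuous sign term to alleviate chattering. The plant parameters, gains, and disturbance are illustrative assumptions, not lab values:

```python
import numpy as np

# Sliding mode tracking control for a single robot joint,
#   theta'' = -a*theta' + u/J + d(t),
# with an unknown bounded disturbance d(t), |d| <= 0.5.
# A tanh boundary layer smooths the switching term to reduce chattering.
A, J = 1.0, 1.0                    # nominal joint damping and inertia
LAM, K, PHI = 5.0, 2.0, 0.05       # surface slope, switching gain (K > |d|_max),
                                   # and boundary-layer width

def simulate(t_end=10.0, dt=0.001):
    theta, omega = 0.5, 0.0        # start away from the reference trajectory
    for i in range(int(t_end / dt)):
        t = i * dt
        ref, ref_d, ref_dd = np.sin(t), np.cos(t), -np.sin(t)
        e, e_d = theta - ref, omega - ref_d
        s = e_d + LAM * e                       # sliding variable
        # equivalent control + smoothed switching term: s' = d - K*tanh(s/PHI)
        u = J * (A * omega + ref_dd - LAM * e_d - K * np.tanh(s / PHI))
        d = 0.5 * np.sin(3 * t)                 # disturbance, unknown to the controller
        omega += (-A * omega + u / J + d) * dt  # forward Euler plant step
        theta += omega * dt
    return theta - np.sin(t_end), omega - np.cos(t_end)

e_end, e_d_end = simulate()
print(f"tracking error after 10 s: {e_end:.4f} rad")
```

Inside the boundary layer the sliding variable is confined to |s| on the order of PHI, so the steady tracking error stays near s/LAM; shrinking PHI tightens tracking at the cost of a more chattering-prone control signal, which is exactly the trade-off the improved reaching laws and hybrid schemes above target.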