Abstract:
The majority of industrial processes display intrinsic nonlinear features; therefore, conventional control procedures that rely on linearized structures are inadequate for obtaining optimal control. This research presents an approach that combines an artificial neural network (ANN) and reinforcement learning (RL) to regulate nonlinear dynamic pressure drop in multi-phase flow systems. The ANN offers strong generalization, disturbance handling, and function approximation, and the proposed control technique combines these strengths with the decision-making capability of the RL methodology. The study introduces two distinct machine-learning methodologies. First, a Hammerstein identification technique is employed to determine a mathematical representation of the multi-phase system from the gathered experimental data. Second, actor-critic learning is employed to adjust the PID parameters adaptively, exploiting the model-free, online learning capabilities of reinforcement learning. The simulation results demonstrate that the proposed controller is highly effective for complex nonlinear systems, displaying strong adaptability and robustness and outperforming a conventional PID controller.
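The abstract names two machine-learning components: Hammerstein identification of the multi-phase system and actor-critic tuning of the PID parameters. As a rough illustration of the first component only, the sketch below fits an over-parameterized Hammerstein model (a cubic static nonlinearity feeding a second-order ARX block) by linear least squares. The model orders, the nonlinearity, and the synthetic data are assumptions for illustration and do not reflect the authors' experimental setup.

import numpy as np

# Minimal sketch (not the paper's code): Hammerstein identification via
# over-parameterized least squares. Orders and "true" coefficients are
# illustrative assumptions only.
rng = np.random.default_rng(0)

# Generate synthetic input/output data from an assumed Hammerstein system.
N = 500
u = rng.uniform(-1.0, 1.0, N)
f_u = 0.8 * u + 0.4 * u**2 - 0.2 * u**3          # static nonlinearity
y = np.zeros(N)
for k in range(2, N):
    y[k] = (1.2 * y[k-1] - 0.5 * y[k-2]
            + 0.6 * f_u[k-1] + 0.2 * f_u[k-2]
            + 0.01 * rng.standard_normal())       # small measurement noise

# Build the regressor: past outputs plus monomials of past inputs, so the
# bilinear Hammerstein parameters can be estimated by ordinary least squares.
rows, targets = [], []
for k in range(2, N):
    rows.append([y[k-1], y[k-2],
                 u[k-1], u[k-1]**2, u[k-1]**3,
                 u[k-2], u[k-2]**2, u[k-2]**3])
    targets.append(y[k])
Phi = np.array(rows)
Y = np.array(targets)

# Linear least-squares estimate of the over-parameterized coefficient vector.
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
print("estimated parameters:", np.round(theta, 3))

The identified linear-in-parameters model can then serve as the simulation plant on which the actor-critic agent adapts the PID gains online, as described in the abstract.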
Date of Conference: 22-25 April 2024
Date Added to IEEE Xplore: 12 June 2024