
Data-Driven Iterative Learning Control for Discrete-Time Systems

eBook - Intelligent Control and Learning Systems

Published 15.11.2022, 1st edition 2022
185,95 €
(incl. VAT)

Bibliographic Data
ISBN/EAN: 9789811959509
Language: English
File size: 3.63 MB
E-Book
Format: PDF
DRM: Digital watermark

Description

This book belongs to the field of control and systems theory. It studies a novel data-driven framework for the design and analysis of iterative learning control (ILC) for nonlinear discrete-time systems. A series of iterative dynamic linearization methods is first discussed to build a linear data mapping between the system's outputs and inputs over two consecutive iterations. On this basis, the book presents a series of data-driven ILC (DDILC) approaches with rigorous analysis. It then develops significant extensions to cases with incomplete data information, specified point tracking, higher-order learning laws, system constraints, nonrepetitive uncertainties, and event-triggered strategies, to facilitate real-world applications. Readers can learn about recent progress on DDILC for complex systems in practical applications. The book is intended for academic scholars, engineers, and graduate students interested in learning control, adaptive control, nonlinear systems, and related fields.
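As a rough, illustrative sketch only (not taken from the book; all symbols and parameter names below are introduced here for exposition), an iterative dynamic linearization of this kind relates the change in output to the change in input between two consecutive iterations, and a data-driven ILC law then updates the input from measured tracking errors:

\[ \Delta y_{k+1}(t+1) = \phi_k(t)\,\Delta u_{k+1}(t), \qquad \Delta u_{k+1}(t) = u_{k+1}(t) - u_k(t), \]

\[ u_{k+1}(t) = u_k(t) + \frac{\rho\,\hat{\phi}_k(t)}{\lambda + |\hat{\phi}_k(t)|^{2}}\, e_k(t+1), \qquad e_k(t+1) = y_d(t+1) - y_k(t+1), \]

where \(\phi_k(t)\) is an iteration-dependent pseudo partial derivative linking the input change to the output change between iterations \(k\) and \(k+1\), \(\hat{\phi}_k(t)\) is its estimate computed from input-output data only, \(y_d\) is the desired trajectory, \(\rho\) is a step size, and \(\lambda > 0\) is a weighting factor that keeps the denominator bounded away from zero.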

About the Authors

Ronghu Chi received the Ph.D. degree from Beijing Jiaotong University, Beijing, China, in 2007. He was a Visiting Scholar with Nanyang Technological University, Singapore, from 2011 to 2012 and a Visiting Professor with the University of Alberta, Edmonton, AB, Canada, from 2014 to 2015. In 2007, he joined Qingdao University of Science and Technology, Qingdao, China, where he is currently a full professor in the School of Automation and Electronic Engineering. He has served in various positions at international conferences and was an invited guest editor of the International Journal of Automation and Computing. He has also served as a council member of the Shandong Institute of Automation and as a committee member of the Data-Driven Control, Learning and Optimization Professional Committee, among others. He was awarded the Taishan Scholarship in 2016. His current research interests include iterative learning control, data-driven control, and intelligent transportation systems. He has published over 100 papers in international journals and conference proceedings.

Yu Hui received the bachelor's degree in automatic control from Qingdao University of Science and Technology, Qingdao, China, in 2016, where he is currently pursuing the Ph.D. degree with the Institute of Artificial Intelligence and Control, School of Automation and Electronic Engineering. His research interests include data-driven control, learning control, and multi-agent systems.

Zhongsheng Hou (SM'13-F'20) received the bachelor's and master's degrees from Jilin University of Technology, China, in 1983 and 1988, respectively, and the Ph.D. degree from Northeastern University, China, in 1994. He was a Postdoctoral Fellow with the Harbin Institute of Technology, China, from 1995 to 1997 and a Visiting Scholar with Yale University, CT, USA, from 2002 to 2003. In 1997, he joined Beijing Jiaotong University, China, where he was a distinguished professor, the founding director of the Advanced Control Systems Lab, and Head of the Department of Automatic Control until 2018. Currently, he is a Chief Professor at Qingdao University. He is also the founding director of the Technical Committee on Data Driven Control, Learning and Optimization (DDCLO) of the Chinese Association of Automation. He is an IEEE Senior Member and a member of the IFAC Technical Committees on Adaptive and Learning Systems and on Transportation Systems. His research interests include data-driven control, model-free adaptive control, learning control, and intelligent transportation systems.

Contents

Chapter 1: Introduction
Chapter 2: Iterative Dynamic Linearization of Nonlinear Repetitive Systems
Chapter 3: Data-Driven Optimal Iterative Learning Control
Chapter 4: Knowledge-Enhanced Data-Driven Optimal Terminal ILC
Chapter 5: Data-Driven Optimal Point-to-Point ILC Using Intermittent Information
Chapter 6: Higher-Order Data-Driven Optimal Iterative Learning Control
Chapter 7: Data-Driven Optimal Iterative Learning Control with Varying Trial Length
Chapter 8: Data-Driven Optimal Iterative Learning Control with Packet Dropouts
Chapter 9: Constrained Data-Driven Optimal Iterative Learning Control
Chapter 10: ESO-Based Data-Driven Optimal Iterative Learning Control
Chapter 11: Quantized Data-Driven Optimal Iterative Learning Control
Chapter 12: Event-Triggered Data-Driven Optimal Iterative Learning Control
Chapter 13: Conclusions and Perspectives
Appendices

Information on E-Books

"E-book" stands for digital book. To read this kind of book, you need either dedicated software for computers, tablets, and smartphones, or an e-book reader. Since many different formats (file types) exist for e-books, there are a few things to keep in mind.
We deliver digital books in three formats: EPUB with DRM (Digital Rights Management), EPUB without DRM, and PDF. For the PDF and DRM-free EPUB formats, you only need to check whether your e-book reader is compatible. If a format with DRM is used, you additionally need a free Adobe® Digital Editions account. When you download an e-book that requires Adobe® Digital Editions, you receive an ACSM file, which must be added to Digital Editions and linked to your account. Some e-book readers (for example, the PocketBook Touch) also support entering the Adobe account login directly, so these ACSM files can be copied straight to the device.
Since e-books can only be downloaded for a limited time (usually 6 months), you should always keep a backup copy on permanent storage (hard disk, USB stick, or CD). The number of downloads is also limited to a maximum of 5.