To compile this list, we used Google Scholar, searching extensively for the most cited papers in Control Theory. As a rule, the more a paper is cited, the greater its impact and importance. This blog post presents the 21 most cited papers in Control Theory. For each paper, we include its authors, number of citations, publication year and venue, and a summary. We have also plotted the citation trend for each paper, showing whether its popularity has grown or declined over time.

1. A NEW APPROACH TO LINEAR FILTERING AND PREDICTION PROBLEMS

Authors: Rudolph Emil Kalman
Published in: Transactions of the ASME, Journal of Basic Engineering, 1960
Number of citations: 41,381
Summary: The classical filtering and prediction problem is re-examined using the Bode-Shannon representation of random processes and the “state-transition” method of analysis of dynamic systems. New results are: (1) The formulation and methods of solution of the problem apply without modification to stationary and nonstationary statistics and to growing-memory and infinite-memory filters. (2) A nonlinear difference (or differential) equation is derived for the covariance matrix of the optimal estimation error. From the solution of this equation the co-efficients of the difference (or differential) equation of the optimal linear filter are obtained without further calculations. (3) The filtering problem is shown to be the dual of the noise-free regulator problem. The new method developed here is applied to two well-known problems, confirming and extending earlier results. The discussion is largely self-contained and proceeds from first…
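
To make the recursion concrete, here is a minimal sketch of the discrete-time predict/update cycle in Python/NumPy. The notation (A, C, Q, R) and the constant-velocity tracking example are modern textbook conventions of ours, not taken from the paper itself.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle of the discrete-time Kalman filter."""
    # Predict: propagate the state estimate and error covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct with the new measurement y.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Example: track position and velocity of a constant-velocity target from noisy position readings.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-3 * np.eye(2)
R = np.array([[0.1]])
x, P = np.zeros(2), np.eye(2)
for y in [0.11, 0.19, 0.32, 0.41]:       # a few noisy position readings
    x, P = kalman_step(x, P, np.array([y]), A, C, Q, R)
print(x)  # estimated position and velocity
```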



2. LINEAR MATRIX INEQUALITIES IN SYSTEM AND CONTROL THEORY

Authors: S Boyd, L El Ghaoui, E Feron, V Balakrishnan
Published in: Philadelphia, USA: SIAM, 1994
Number of citations: 27,481
Summary: The basic topic of this book is solving problems from system and control theory using convex optimization. We show that a wide variety of problems arising in system and control theory can be reduced to a handful of standard convex and quasiconvex optimization problems that involve matrix inequalities. For a few special cases there are “analytic solutions” to these problems, but our main point is that they can be solved numerically in all cases. These standard problems can be solved in polynomial time (by, e.g., the ellipsoid algorithm of Shor, Nemirovskii, and Yudin), and so are tractable, at least in a theoretical sense. Recently developed interior-point methods for these standard problems have been found to be extremely efficient in practice. Therefore, we consider the original problems from system and control theory as solved. This book is primarily intended for the researcher in system and control theory, but can…
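
As a flavor of the kind of problem the book treats, here is a hedged sketch of the Lyapunov stability LMI (P > 0, AᵀP + PA < 0) solved with the Python package CVXPY; the example matrix and the use of CVXPY are our own illustration, not material from the book.

```python
import numpy as np
import cvxpy as cp

# Continuous-time stability of dx/dt = A x is equivalent to the LMI
#   P > 0,  A' P + P A < 0,
# one of the standard feasibility problems discussed in the book.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])           # an example stable matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()                            # uses CVXPY's default SDP-capable solver

print(prob.status)   # 'optimal' => the LMI is feasible, so A is Hurwitz
print(P.value)       # a Lyapunov matrix certifying stability
```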



3. CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING

Authors: Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
Published in: arXiv preprint arXiv:1509.02971, 2015
Number of citations: 11,228
Summary: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
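
The sketch below compresses one DDPG-style update (critic regression, deterministic policy gradient for the actor, and soft target updates) into a few lines of PyTorch on a fake mini-batch. It is only an illustration of the structure described in the abstract, not the authors' code; a real implementation adds a replay buffer, exploration noise, and environment interaction.

```python
import copy
import torch
import torch.nn as nn

obs_dim, act_dim, gamma, tau = 3, 1, 0.99, 0.005

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)   # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

# A fake mini-batch standing in for samples drawn from a replay buffer.
s  = torch.randn(32, obs_dim)
a  = torch.rand(32, act_dim) * 2 - 1
r  = torch.randn(32, 1)
s2 = torch.randn(32, obs_dim)

# Critic update: regress Q(s, a) onto the bootstrapped target.
with torch.no_grad():
    y = r + gamma * critic_t(torch.cat([s2, actor_t(s2)], dim=1))
critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), y)
opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

# Actor update: deterministic policy gradient, ascend Q(s, mu(s)).
actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Soft ("Polyak") update of the target networks.
for net, net_t in [(actor, actor_t), (critic, critic_t)]:
    for p, p_t in zip(net.parameters(), net_t.parameters()):
        p_t.data.mul_(1 - tau).add_(tau * p.data)
```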



4. YALMIP: A TOOLBOX FOR MODELING AND OPTIMIZATION IN MATLAB

Authors: Johan Lofberg
Published in: IEEE International Symposium on Computer Aided Control Systems Design, 2004
Number of citations: 10,925
Summary: The MATLAB toolbox YALMIP is introduced. It is described how YALMIP can be used to model and solve optimization problems typically occurring in systems and control theory. The free MATLAB toolbox YALMIP was initially developed to model SDPs and solve them by interfacing external solvers. The toolbox makes development of optimization problems in general, and control-oriented SDP problems in particular, extremely simple. In fact, learning three YALMIP commands is enough for most users to model and solve their optimization problems.



5. NEW RESULTS IN LINEAR FILTERING AND PREDICTION THEORY

Authors: Rudolph E Kalman, Richard S Bucy
Published in: ASME, Journal of Basic Engineering, 1961
Number of citations: 8,934
Summary: A nonlinear differential equation of the Riccati type is derived for the covariance matrix of the optimal filtering error. The solution of this “variance equation” completely specifies the optimal filter for either finite or infinite smoothing intervals and stationary or nonstationary statistics. The variance equation is closely related to the Hamiltonian (canonical) differential equations of the calculus of variations. Analytic solutions are available in some cases. The significance of the variance equation is illustrated by examples which duplicate, simplify, or extend earlier results in this field. The Duality Principle relating stochastic estimation and deterministic control problems plays an important role in the proof of theoretical results. In several examples, the estimation problem and its dual are discussed side-by-side. Properties of the variance equation are of great interest in the theory of adaptive systems. Some aspects of this are…
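
In modern notation (which differs from the paper's), the "variance equation" is the matrix Riccati differential equation

```latex
\dot{P}(t) = A(t)\,P(t) + P(t)\,A(t)^{\top} + Q(t)
             - P(t)\,C(t)^{\top} R(t)^{-1} C(t)\,P(t), \qquad P(t_0) = P_0,
```

where P is the error covariance of the optimal filter, A and C are the state and measurement matrices, and Q and R are the process- and measurement-noise intensities; the optimal (Kalman-Bucy) gain is then K(t) = P(t)C(t)ᵀR(t)⁻¹.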



6. STATE-SPACE SOLUTIONS TO STANDARD H2 AND H∞ CONTROL PROBLEMS

Authors: John Doyle, Keith Glover, Pramod Khargonekar, Bruce Francis
Published in: 1988 American Control Conference, 1988
Number of citations: 8,764
Summary: Simple state-space formulas are presented for a controller solving a standard H∞ problem. The controller has the same state dimension as the plant, its computation involves only two Riccati equations, and it has a separation structure reminiscent of classical LQG (i.e., H2) theory. This paper is also intended to be of tutorial value, so a standard H2 solution is developed in parallel.
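
The closed-form H∞ formulas are more involved, but the "two Riccati equations" structure is easy to show for the companion H2 (LQG) problem. The sketch below uses SciPy on an illustrative plant of our own choosing; it is not code from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# An illustrative plant  dx/dt = A x + B u + w,  y = C x + v.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Q = np.eye(2)          # state weight / process-noise intensity (taken equal here)
R = np.array([[1.0]])  # control weight / measurement-noise intensity

# Riccati #1: state-feedback (LQR) gain  u = -K x_hat.
X = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ X)

# Riccati #2: Kalman filter gain  L  for the state estimator.
Y = solve_continuous_are(A.T, C.T, Q, R)
L = Y @ C.T @ np.linalg.inv(R)

# The H2-optimal (LQG) controller is the observer  dx_hat = A x_hat + B u + L (y - C x_hat)
# with u = -K x_hat: exactly the "separation structure" the summary refers to.
print("K =", K)
print("L =", L)
```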



7. VARIABLE STRUCTURE SYSTEMS WITH SLIDING MODES

Authors: Vadim Utkin
Published in: IEEE Transactions on Automatic Control, 1977
Number of citations: 6,916
Summary: Variable structure systems consist of a set of continuous subsystems together with suitable switching logic. Advantageous properties result from changing structures according to this switching logic. Design and analysis for this class of systems are surveyed in this paper.
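
As a toy illustration of the idea (not an example from the paper), here is a sliding mode controller for a double integrator with a bounded matched disturbance, written in Python:

```python
import numpy as np

# Double integrator  x1' = x2,  x2' = u + d(t),  driven to the origin by the
# variable-structure law  u = -c*x2 - k*sign(s)  with sliding surface  s = c*x1 + x2.
c, k, dt = 1.0, 2.0, 1e-3
x1, x2 = 1.0, 0.0
for i in range(20000):
    t = i * dt
    d = 0.3 * np.sin(5 * t)              # bounded matched disturbance
    s = c * x1 + x2                      # sliding variable
    u = -c * x2 - k * np.sign(s)         # equivalent control + switching term
    x1 += dt * x2
    x2 += dt * (u + d)
print(round(x1, 3), round(x2, 3), round(c * x1 + x2, 3))  # near zero: sliding mode reached
```

Once s reaches zero, the motion obeys x2 = -c·x1 regardless of the disturbance, which is the "advantageous property" of the sliding regime; the discontinuous switching also produces the chattering discussed in paper 20 below.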



8. GENERALIZED PREDICTIVE CONTROL - PART I. THE BASIC ALGORITHM

Authors: David W Clarke, Coorous Mohtadi, P Simon Tuffs
Published in: Automatica, 1987
Number of citations: 6,202
Summary: Current self-tuning algorithms lack robustness to prior choices of either dead-time or model order. A novel method—generalized predictive control or GPC—is developed which is shown by simulation studies to be superior to accepted techniques such as generalized minimum-variance and pole-placement. This receding-horizon method depends on predicting the plant’s output over several steps based on assumptions about future control actions. One assumption—that there is a “control horizon” beyond which all control increments become zero—is shown to be beneficial both in terms of robustness and for providing simplified calculations. Choosing particular values of the output and control horizons produces as subsets of the method various useful algorithms such as GMV, EPSAC, Peterka’s predictive controller (1984, Automatica, 20, 39—50) and Ydstie’s extended-horizon design (1984, IFAC 9th World Congress…
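
Proper GPC works with a CARIMA model, Diophantine recursions, and control increments; the sketch below keeps only the essential receding-horizon structure (predict several steps ahead, minimize a quadratic cost, apply the first move) for a simple state-space model. This simplification is ours, not the paper's algorithm.

```python
import numpy as np

# Plant model (discrete double integrator) and horizons.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
N, lam = 20, 0.01                      # prediction horizon and control weighting

# Stack the N-step predictions:  Y = F x0 + Phi U.
F = np.vstack([C @ np.linalg.matrix_power(A, i + 1) for i in range(N)])
Phi = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        Phi[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()

x = np.array([1.0, 0.0])
y_ref = np.zeros(N)                    # regulate the output to zero
for _ in range(300):                   # receding-horizon loop
    U = np.linalg.solve(Phi.T @ Phi + lam * np.eye(N),
                        Phi.T @ (y_ref - F @ x))
    x = A @ x + B.flatten() * U[0]     # apply only the first move, then re-solve
print(np.round(x, 3))                  # state after 30 s of receding-horizon regulation
```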



9. A SURVEY OF RECENT RESULTS IN NETWORKED CONTROL SYSTEMS

Authors: João P Hespanha, Payam Naghshtabrizi, Yonggang Xu
Published in: Proceedings of the IEEE, 2007
Number of citations: 4,322
Summary: Networked control systems (NCSs) are spatially distributed systems for which the communication between sensors, actuators, and controllers is supported by a shared communication network. We review several recent results on estimation, analysis, and controller synthesis for NCSs. The results surveyed address channel limitations in terms of packet-rates, sampling, network delay, and packet dropouts. The results are presented in a tutorial fashion, comparing alternative methodologies.



10. FLATNESS AND DEFECT OF NON-LINEAR SYSTEMS: INTRODUCTORY THEORY AND EXAMPLES

Authors: Michel Fliess, Jean Lévine, Philippe Martin, Pierre Rouchon
Published in: International Journal of Control, 1995
Number of citations: 3,816
Summary: We introduce flat systems, which are equivalent to linear ones via a special type of feedback called endogenous. Their physical properties are subsumed by a linearizing output and they might be regarded as providing another nonlinear extension of Kalman’s controllability. The distance to flatness is measured by a non-negative integer, the defect. We utilize differential algebra, where flatness and defect are best defined without distinguishing between input, state, output and other variables. Many realistic classes of examples are flat. We treat two popular ones: the crane and the car with n trailers, the motion planning of which is obtained via elementary properties of plane curves. The three non-flat examples, the simple, double and variable-length pendulums, are borrowed from non-linear physics. A high frequency control strategy is proposed such that the averaged systems become flat.
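
The paper's worked examples (the crane, the car with trailers) take some setting up; the simplest flat system is the double integrator, where the position itself is a flat output. The sketch below, our own illustration, plans a rest-to-rest motion entirely in the flat output:

```python
import numpy as np

# For the double integrator  q'' = u  the position q is a flat output: the state
# (q, q') and the input u are functions of q and its derivatives. Planning a
# rest-to-rest motion therefore reduces to choosing a smooth curve for q alone.
T, q0, qf = 2.0, 0.0, 1.0
t = np.linspace(0.0, T, 201)
s = t / T

# Quintic polynomial with zero velocity and acceleration at both ends.
q    = q0 + (qf - q0) * (10*s**3 - 15*s**4 + 6*s**5)
qdot = (qf - q0) / T    * (30*s**2 - 60*s**3 + 30*s**4)
u    = (qf - q0) / T**2 * (60*s   - 180*s**2 + 120*s**3)   # u = q''

print(q[0], q[-1])        # 0.0 -> 1.0
print(qdot[0], qdot[-1])  # starts and ends at rest
print(u[0], u[-1])        # zero acceleration at both ends
```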



11. MUJOCO: A PHYSICS ENGINE FOR MODEL-BASED CONTROL

Authors: Emanuel Todorov, Tom Erez, Yuval Tassa
Published in: 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012
Number of citations: 3,789
Summary: We describe a new physics engine tailored to model-based control. Multi-joint dynamics are represented in generalized coordinates and computed via recursive algorithms. Contact responses are computed via efficient new algorithms we have developed, based on the modern velocity-stepping approach which avoids the difficulties with spring-dampers. Models are specified using either a high-level C++ API or an intuitive XML file format. A built-in compiler transforms the user model into an optimized data structure used for runtime computation. The engine can compute both forward and inverse dynamics. The latter are well-defined even in the presence of contacts and equality constraints. The model can include tendon wrapping as well as actuator activation states (e.g. pneumatic cylinders or muscles). To facilitate optimal control applications and in particular sampling and finite differencing, the dynamics can be…
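
The paper describes the C++ API and the XML (MJCF) model format; the snippet below shows the same model-plus-step workflow through the present-day official `mujoco` Python bindings, which postdate the paper, with a deliberately minimal model:

```python
import mujoco  # official Python bindings for the MuJoCo engine

# A minimal MJCF model: one free-floating sphere under gravity.
XML = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)   # compile the XML into the runtime structure
data = mujoco.MjData(model)                   # simulation state (qpos, qvel, ...)

for _ in range(1000):                         # 1000 steps at the default 2 ms timestep
    mujoco.mj_step(model, data)               # forward dynamics step

print(data.qpos[:3])                          # the sphere has fallen under gravity
```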



12. CONTRIBUTIONS TO THE THEORY OF OPTIMAL CONTROL

Authors: Rudolf Emil Kalman
Published in: Bol. Soc. Mat. Mexicana, 1960
Number of citations: 3,421
Summary: This is one of the two ground-breaking papers by Kalman that appeared in 1960—with the other one being the filtering and prediction paper (number 1 in this list). This first paper, which deals with linear-quadratic feedback control, set the stage for what came to be known as LQR (Linear-Quadratic-Regulator) control, while the combination of the two papers formed the basis for LQG (Linear-Quadratic-Gaussian) control. Both LQR and LQG control had major influence on researchers, teachers, and practitioners of control in the decades that followed. The idea of designing a feedback controller such that the integral of the square of tracking error is minimized was first proposed by Wiener and Hall, and further developed in the influential book by Newton, Gould and Kaiser. However, the problem formulation in this book remained unsatisfactory from a mathematical point of view, but, more importantly, the algorithms obtained allowed application only to rather low order systems and were thus of limited value. This is not surprising since it basically took until the H2-interpretation in the 1980s of LQG control before a satisfactory formulation of least squares feedback control design was obtained. Kalman’s formulation in terms of finding the least squares control that evolves from an arbitrary initial state is a precise formulation of the optimal least squares transient control problem. The paper introduced the very important notion of controllability, as the possibility of transferring any initial state to zero by a suitable control action. It includes the necessary and sufficient condition for controllability in terms of the positive definiteness of the Controllability…
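
For linear time-invariant systems, the Gramian condition mentioned above is equivalent to the familiar rank test on [B, AB, ..., A^(n-1)B], which is easy to check numerically. The example matrices below are ours, not the paper's:

```python
import numpy as np

def controllability_matrix(A, B):
    """Stack [B, AB, A^2 B, ..., A^(n-1) B] for the LTI system x' = A x + B u."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# A double integrator driven by a force input: controllable.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Wc = controllability_matrix(A, B)
print(np.linalg.matrix_rank(Wc) == A.shape[0])   # True: any initial state can be driven to zero

# An input that cannot influence the velocity: no longer controllable.
B_bad = np.array([[1.0], [0.0]])
print(np.linalg.matrix_rank(controllability_matrix(A, B_bad)) == A.shape[0])  # False
```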



13. OPTIMAL FEEDBACK CONTROL AS A THEORY OF MOTOR COORDINATION

Authors: Emanuel Todorov, Michael I Jordan
Published in: Nature neuroscience, 2002
Number of citations: 3,217
Summary: A central problem in motor control is understanding how the many biomechanical degrees of freedom are coordinated to achieve a common goal. An especially puzzling aspect of coordination is that behavioral goals are achieved reliably and repeatedly with movements rarely reproducible in their detail. Existing theoretical frameworks emphasize either goal achievement or the richness of motor variability, but fail to reconcile the two. Here we propose an alternative theory based on stochastic optimal feedback control. We show that the optimal strategy in the face of uncertainty is to allow variability in redundant (task-irrelevant) dimensions. This strategy does not enforce a desired trajectory, but uses feedback more intelligently, correcting only those deviations that interfere with task goals. From this framework, task-constrained variability, goal-directed corrections, motor synergies, controlled parameters, simplifying…



14. CONTROLLABILITY OF COMPLEX NETWORKS

Authors: Yang-Yu Liu, Jean-Jacques Slotine, Albert-László Barabási
Published in: Nature, 2011
Number of citations: 3,206
Summary: The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. Although control theory offers mathematical tools for steering engineered and natural systems towards a desired state, a framework to control complex self-organized systems is lacking. Here we develop analytical tools to study the controllability of an arbitrary complex directed network, identifying the set of driver nodes with time-dependent control that can guide the system’s entire dynamics. We apply these tools to several real networks, finding that the number of driver nodes is determined mainly by the network’s degree distribution. We show that sparse inhomogeneous networks, which emerge in many real complex systems, are the most difficult to control, but that dense and homogeneous networks can be controlled using a few driver nodes. Counterintuitively, we find that in both model and real…
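
The paper's central computational tool is a maximum matching on a bipartite representation of the directed network: nodes left unmatched by a maximum matching are the driver nodes, and their number is max(N − |M*|, 1). Here is a small sketch with networkx on a toy graph of our own choosing:

```python
import networkx as nx

# Directed network whose structural controllability we want to assess.
G = nx.DiGraph([(1, 2), (2, 3), (2, 4), (4, 5)])

# Bipartite representation: an "out" copy and an "in" copy of every node,
# with one bipartite edge per directed link.
B = nx.Graph()
out_side = [("out", u) for u in G.nodes()]
in_side = [("in", v) for v in G.nodes()]
B.add_nodes_from(out_side)
B.add_nodes_from(in_side)
B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())

matching = nx.bipartite.maximum_matching(B, top_nodes=out_side)
matched_in = {v for side, v in matching if side == "in"}

# Nodes whose "in" copy is unmatched must receive independent control signals:
# they are the driver nodes.
drivers = set(G.nodes()) - matched_in
n_drivers = max(len(drivers), 1)
print(drivers, n_drivers)
```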



15. MULTIVARIABLE FEEDBACK DESIGN: CONCEPTS FOR A CLASSICAL/MODERN SYNTHESIS

Authors: John Doyle, Gunter Stein
Published in: IEEE Transactions on Automatic Control, 1981
Number of citations: 3,169
Summary: This paper presents a practical design perspective on multivariable feedback control problems. It reviews the basic issue (feedback design in the face of uncertainties) and generalizes known single-input, single-output (SISO) statements and constraints of the design problem to multi-input, multi-output (MIMO) cases. Two major MIMO design approaches are then evaluated in the context of these results.



16. HYBRID DYNAMICAL SYSTEMS

Authors: Rafal Goebel, Ricardo G Sanfelice, Andrew R Teel
Published in: IEEE Control Systems Magazine, 2009
Number of citations: 3,089
Summary: This article addresses robust stability and control for systems that combine continuous-time and discrete-time dynamics. It is a tutorial on modeling the dynamics of hybrid systems, on the elements of stability theory for hybrid systems, and on the basics of hybrid control. The presentation and selection of material is oriented toward the analysis of asymptotic stability in hybrid systems and the design of stabilizing hybrid controllers. Our emphasis on the robustness of asymptotic stability to data perturbation, external disturbances, and measurement error distinguishes the approach taken here from other approaches to hybrid systems. While we make some connections to alternative approaches, this article does not aspire to be a survey of the hybrid system literature, which is vast and multifaceted.
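
The standard textbook example of a hybrid system is a bouncing ball: it flows under gravity while above the ground and jumps (velocity reversal with a restitution coefficient) at impact. The simulation below is our own illustration, not code from the article:

```python
# Bouncing ball as a hybrid system: flow  h' = v, v' = -g  while h > 0,
# jump  v := -gamma * v  when the ball hits the ground moving downward.
g, gamma, dt = 9.81, 0.8, 1e-3
h, v, t = 1.0, 0.0, 0.0
bounces = 0
while t < 3.0:
    if h <= 0.0 and v < 0.0:      # jump set: impact with the ground
        v = -gamma * v            # jump map: dissipative velocity reversal
        h = 0.0
        bounces += 1
    else:                         # flow set: ball above the ground
        h += dt * v
        v += dt * (-g)
    t += dt
print(bounces, round(h, 3))       # several bounces, each lower than the last
```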



17. THE INTERNAL MODEL PRINCIPLE OF CONTROL THEORY

Authors: Bruce A Francis, Walter Murray Wonham
Published in: Automatica, 1976
Number of citations: 3,041
Summary: The classical regulator problem is posed in the context of linear, time-invariant, finite-dimensional systems with deterministic disturbance and reference signals. Control action is generated by a compensator which is required to provide closed loop stability and output regulation in the face of small variations in certain system parameters. It is shown, using the geometric approach, that such a structurally stable synthesis must utilize feedback of the regulated variable, and incorporate in the feedback path a suitably reduplicated model of the dynamic structure of the disturbance and reference signals. The necessity of this control structure constitutes the Internal Model Principle. It is shown that, in the frequency domain, the purpose of the internal model is to supply closed loop transmission zeros which cancel the unstable poles of the disturbance and reference signals. Finally, the Internal Model Principle is extended to…
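
A minimal numerical illustration of the principle (ours, not the paper's): to reject a constant disturbance with zero steady-state error, the controller must contain an integrator, i.e., an internal model of a constant signal.

```python
# First-order plant  x' = -x + u + d  with a constant disturbance d.
# A pure proportional controller leaves a steady-state error; adding an
# integrator (the internal model of a constant signal) removes it.
def simulate(ki, kp=4.0, d=1.0, dt=1e-3, T=20.0):
    x, z = 0.0, 0.0                    # plant state and integrator state
    for _ in range(int(T / dt)):
        e = 0.0 - x                    # regulate the output to zero
        u = kp * e + ki * z
        z += dt * e
        x += dt * (-x + u + d)
    return x

print(round(simulate(ki=0.0), 4))   # proportional only: nonzero steady-state error
print(round(simulate(ki=2.0), 4))   # with internal model (integrator): error ~ 0
```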



18. ANALYSIS OF FEEDBACK SYSTEMS WITH STRUCTURED UNCERTAINTIES

Authors: John Doyle
Published in: Control Theory and Applications, IEE Proceedings D, 1982
Number of citations: 2,884
Summary: The paper introduces a general approach for analysing linear systems with structured uncertainty based on a new generalised spectral theory for matrices. The results of the paper naturally extend techniques based on singular values and eliminate their most serious difficulties.



19. BILATERAL CONTROL OF TELEOPERATORS WITH TIME DELAY

Authors: Robert J Anderson, Mark W Spong
Published in: Proceedings of the 1988 IEEE International Conference on Systems, Man, and Cybernetics, 1988
Number of citations: 2,856
Summary: When a robot is operated remotely by use of a teleoperator, it is desirable to communicate contact force information from the slave to the master, in order to kinesthetically couple the operator to the environment and increase the sense of telepresence. One problem, however, recognized as early as 1965, has remained unsolved until now: How to maintain stability in a force-reflecting bilateral teleoperator in the presence of substantial time delay? In this paper, we present a solution to this problem.



20. A CONTROL ENGINEER’S GUIDE TO SLIDING MODE CONTROL

Authors: K David Young, Vadim I Utkin, Umit Ozguner
Published in: IEEE Transactions on Control Systems Technology, 1999
Number of citations: 2,769
Summary: This paper presents a guide to sliding mode control for practicing control engineers. It offers an accurate assessment of the so-called chattering phenomenon, catalogs implementable sliding mode control design solutions, and provides a frame of reference for future sliding mode control research.



21. KALMAN FILTERING WITH INTERMITTENT OBSERVATIONS

Authors: Bruno Sinopoli, Luca Schenato, Massimo Franceschetti, Kameshwar Poolla, Michael I Jordan, Shankar S Sastry
Published in: IEEE Transactions on Automatic Control, 2004
Number of citations: 2,758
Summary: Motivated by navigation and tracking applications within sensor networks, we consider the problem of performing Kalman filtering with intermittent observations. When data travel along unreliable communication channels in a large, wireless, multihop sensor network, the effect of communication delays and loss of information in the control loop cannot be neglected. We address this problem starting from the discrete Kalman filtering formulation, and modeling the arrival of the observation as a random process. We study the statistical convergence properties of the estimation error covariance, showing the existence of a critical value for the arrival rate of the observations, beyond which a transition to an unbounded state error covariance occurs. We also give upper and lower bounds on this expected state error covariance.
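
The object the paper studies is the error-covariance recursion in which the measurement update happens only when a Bernoulli arrival variable equals one. The scalar sketch below (with illustrative numbers of our own choosing) shows how sparser arrivals inflate the covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Scalar unstable system  x_{k+1} = a x_k + w_k,  y_k = x_k + v_k,
# with each measurement arriving only with probability lam (Bernoulli).
a, q, r = 1.3, 0.1, 0.5

def mean_error_covariance(lam, steps=5000):
    P, total = 1.0, 0.0
    for _ in range(steps):
        P = a * a * P + q                 # time update (always happens)
        if rng.random() < lam:            # gamma_k = 1: the measurement arrived
            P = P - P * P / (P + r)       # measurement update
        total += P
    return total / steps

for lam in (0.9, 0.6, 0.2):
    print(lam, round(mean_error_covariance(lam), 2))
# Sparser arrivals give a much larger error covariance; the paper proves that the
# *expected* covariance stays bounded only above a critical arrival rate, which
# for a scalar system with a directly observed state works out to 1 - 1/a**2 (about 0.41 here).
```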