Publications of Eduardo D. Sontag jointly with M. Siami
Articles in journals or book chapters
Numerical ``direct'' approaches to time-optimal control often fail to find solutions that are singular in the sense of the Pontryagin Maximum Principle; they behave better when searching for saturated (bang-bang) solutions. In previous work by one of the authors, singular solutions were theoretically shown to exist for the time-optimal problem for two-link manipulators under hard torque constraints. The theoretical results gave explicit formulas, based on Lie theory, for singular segments of trajectories, but the global structure of solutions remains unknown. In this work, we show how to effectively combine these theoretically derived formulas with general-purpose optimal control software. By using the explicit formula given by theory on the intervals where the numerical solution enters a singular arc, we not only obtain an algebraic expression for the control on those intervals, but are also able to remove artifacts present in the numerical solution. In this way, the best features of numerical algorithms and theory complement each other and provide a better picture of the global optimal structure. We showcase the technique on a two-degree-of-freedom robotic arm example, and also propose a way to extend the method to robotic arms with more degrees of freedom through partial feedback linearization, assuming the desired task can be mostly performed by a few of the robot's degrees of freedom and imposing a prespecified trajectory on the remaining joints.
We develop some basic principles for the design and robustness analysis of a continuous-time bilinear dynamical network, where an attacker can manipulate the strength of the interconnections/edges between some of the agents/nodes. We formulate the edge protection optimization problem of picking a limited number of attack-free edges so as to minimize the impact of the attack on the bilinear dynamical network. In particular, the H2-norm of bilinear systems is known to capture robustness and performance properties analogous to those of its linear counterpart, and it provides valuable insights for identifying which edges are most sensitive to attacks. The exact optimization problem is combinatorial in the number of edges, and brute-force approaches scale poorly. However, we show that the H2-norm as a cost function is supermodular and therefore allows for efficient greedy approximations of the optimal solution. We illustrate and compare the effectiveness of our theoretical findings via numerical simulations.
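The greedy protection scheme can be sketched on a toy example. This is a minimal sketch, not the paper's method: the paper treats the bilinear case, while here the H2 norm of a linear network stands in as the cost, and the 4-node cycle topology, nominal weight, and attacked weights are all made-up illustrative values.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_sq(A):
    """Squared H2 norm of (A, B=I, C=I): trace of the controllability
    Gramian P solving A P + P A' + I = 0."""
    P = solve_continuous_lyapunov(A, -np.eye(A.shape[0]))
    return float(np.trace(P))

# Hypothetical 4-node cycle. Each edge has nominal weight 1.0; the attacker
# raises every unprotected edge to its "attacked" weight below.
n = 4
attacked_weight = {(0, 1): 2.5, (1, 2): 1.2, (2, 3): 1.5, (0, 3): 1.1}
NOMINAL = 1.0

def cost(protected):
    """H2^2 of the network when all unprotected edges are attacked."""
    A = -4.0 * np.eye(n)
    for (i, j), w_att in attacked_weight.items():
        w = NOMINAL if (i, j) in protected else w_att
        A[i, j] = A[j, i] = w
    return h2_sq(A)

def greedy_protect(k):
    """Greedy edge protection: supermodularity of the H2 cost is what
    justifies this one-edge-at-a-time heuristic."""
    protected = set()
    for _ in range(k):
        best = min((e for e in attacked_weight if e not in protected),
                   key=lambda e: cost(protected | {e}))
        protected.add(best)
    return protected

print(sorted(greedy_protect(2)))
```

As expected, the greedy heuristic protects the edge subject to the largest weight manipulation first, since removing that perturbation yields the biggest marginal drop in the H2 cost.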
Motivated by the growing use of Artificial Intelligence (AI) tools in control design, this paper takes the first steps towards bridging the gap between results from Direct Gradient methods for the Linear Quadratic Regulator (LQR), and neural networks. More specifically, it looks into the case where one wants to find a Linear Feed-Forward Neural Network (LFFNN) feedback that minimizes an LQR cost. This paper starts by computing gradient formulas for the parameters of each layer, which are used to derive a key conservation law of the system. This conservation law is then leveraged to prove boundedness and global convergence of solutions to critical points, and invariance of the set of stabilizing networks under the training dynamics. This is followed by an analysis of the case where the LFFNN has a single hidden layer. For this case, the paper proves that the training converges not only to critical points but to the optimal feedback control law, for all initializations except a set of measure zero. These theoretical results are followed by an extensive analysis of a simple version of the problem (the ``vector case''), proving the theoretical properties of accelerated convergence and robustness for this simpler example. Finally, the paper presents numerical evidence of faster convergence of the training of general LFFNNs when compared to traditional direct gradient methods, showing that the acceleration of the solution is observable even when the gradient is not explicitly computed but estimated from evaluations of the cost function.
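The last point (training driven purely by cost evaluations) can be sketched on the simplest instance. This is a made-up scalar example, not the paper's setup: plant, weights, step size, and initialization are all illustrative, and the closed-form scalar LQR cost and Riccati solution are standard.

```python
import math

# Toy scalar LQR: plant xdot = a x + b u, feedback u = -k x, x(0) = 1.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

def lqr_cost(k):
    """J(k) = integral of q x^2 + r u^2; finite only if a - b k < 0."""
    if b * k <= a:
        return math.inf
    return (q + r * k * k) / (2.0 * (b * k - a))

# Direct gradient descent, with the gradient estimated purely from
# evaluations of the cost function (no explicit gradient formula).
k, step, eps = 3.0, 0.1, 1e-6
for _ in range(2000):
    grad = (lqr_cost(k + eps) - lqr_cost(k - eps)) / (2 * eps)
    k -= step * grad

# Optimal gain from the scalar algebraic Riccati equation, for comparison.
p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
k_star = b * p / r
print(k, k_star)   # both close to 1 + sqrt(2)
```

Because the cost is infinite outside the stabilizing set, the iterates stay stabilizing throughout the descent, mirroring the invariance property proved in the paper.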
Conference articles
Motivated by the current interest in using Artificial Intelligence (AI) tools in control design, this paper takes the first steps towards bridging the gap between results from gradient methods for solving the LQR control problem and neural networks. More specifically, it looks into the case where one wants to find a Linear Feed-Forward Neural Network (LFFNN) that minimizes the Linear Quadratic Regulator (LQR) cost. This work develops gradient formulas that can be used to implement the training of LFFNNs to solve the LQR problem, and derives an important conservation law of the system. This conservation law is then leveraged to prove global convergence of solutions and invariance of the set of stabilizing networks under the training dynamics. These theoretical results are then followed by an extensive analysis of the simplest version of the problem (the ``scalar case'') and by numerical evidence of faster convergence of the training of general LFFNNs when compared to traditional direct gradient methods. These results not only serve as an indication of the theoretical value of studying such a problem, but also of the practical value of LFFNNs as design tools for data-driven control applications.
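The kind of conservation law at play can be checked numerically on a two-layer linear network. This sketch uses a generic quadratic surrogate loss ||K2 K1 - T||_F^2 for a made-up target T in place of the LQR cost (the layer sizes and seed are also arbitrary); the conserved quantity K1 K1' - K2' K2 depends only on the layered product structure, so it is exactly invariant under gradient flow and drifts only at O(step^2) under discrete gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
K1 = rng.normal(size=(3, 2))   # hidden-layer weights (hypothetical sizes)
K2 = rng.normal(size=(2, 3))   # output-layer weights
T = rng.normal(size=(2, 2))    # made-up target end-to-end gain

def invariant(K1, K2):
    # Conserved along the gradient flow of any loss of the product K2 K1.
    return K1 @ K1.T - K2.T @ K2

I0 = invariant(K1, K2)
loss0 = np.linalg.norm(K2 @ K1 - T) ** 2
step = 1e-4
for _ in range(2000):
    E = K2 @ K1 - T            # residual of the end-to-end gain
    g1 = 2 * K2.T @ E          # dL/dK1 for L = ||K2 K1 - T||_F^2
    g2 = 2 * E @ K1.T          # dL/dK2
    K1 -= step * g1
    K2 -= step * g2

drift = np.linalg.norm(invariant(K1, K2) - I0)
loss = np.linalg.norm(K2 @ K1 - T) ** 2
print(drift, loss0, loss)      # drift stays near zero while the loss falls
```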
Recent research in neural networks and machine learning suggests that using many more parameters than strictly required by the initial complexity of a regression problem can result in more accurate or faster-converging models -- contrary to classical statistical belief. This phenomenon, sometimes known as ``benign overfitting'', raises the question of in what other ways overparameterization might affect the properties of a learning problem. In this work, we investigate the effects of overfitting on the robustness of gradient-descent training when subject to uncertainty on the gradient estimation. This uncertainty arises naturally if the gradient is estimated from noisy data or measured directly. Our object of study is a linear neural network with a single, arbitrarily wide, hidden layer and an arbitrary number of inputs and outputs. In this paper we solve the problem for the case where the input and output of our neural network are one-dimensional, deriving sufficient conditions for robustness of our system based on necessary and sufficient conditions for convergence in the undisturbed case. We then show that the general overparametrized formulation introduces a set of spurious equilibria which lie outside the set where the loss function is minimized, and discuss directions of future work that might extend our current results to more general formulations.
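A one-dimensional toy version of such spurious equilibria (all numbers here are illustrative, not taken from the paper): factor a regression coefficient c as a product a*b of two trainable parameters. The origin then becomes a critical point of the loss that is not a minimizer, although plain gradient descent escapes it from almost every initialization.

```python
c = 2.0                                  # made-up target coefficient
f = lambda a, b: (a * b - c) ** 2        # overparameterized loss: c factored as a*b
grad = lambda a, b: (2 * (a * b - c) * b, 2 * (a * b - c) * a)

# The origin is a spurious equilibrium: the gradient vanishes there,
# yet the loss equals c**2 rather than its minimum value 0.
print(grad(0.0, 0.0), f(0.0, 0.0))

# From almost any other initialization, gradient descent reaches a minimizer.
a, b, step = 0.1, 0.1, 0.05
for _ in range(2000):
    ga, gb = grad(a, b)
    a, b = a - step * ga, b - step * gb
print(a * b, f(a, b))                    # product close to c, loss near zero
```

Under gradient uncertainty the picture changes: noise in the gradient estimate can push trajectories toward or away from such saddle-type equilibria, which is why they matter for the robustness analysis.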
In this paper, we investigate the problem of finding a sparse sensor and actuator (S/A) schedule that minimizes the approximation error between the input-output behavior of a fully sensed/actuated bilinear system and the system with the scheduling. The quality of this approximation is measured by an H2-like metric, which is defined for a bilinear (time-varying) system with S/A scheduling based on the discrete Laplace transform of its Volterra kernels. First, we discuss the difficulties of designing S/A schedules for bilinear systems, which prevented us from finding a polynomial-time algorithm for solving the problem. We then propose a polynomial-time S/A scheduling heuristic that selects a fraction of sensors and node actuators at each time step while maintaining a small approximation error, in this H2-based sense, between the input-output behavior of the fully sensed/actuated system and the one with S/A scheduling. Numerical experiments illustrate the good approximation quality of our proposed methods.
In large-scale networks, agents and links are often vulnerable to attacks. This paper focuses on continuous-time bilinear networks, where additive disturbances model attacks or uncertainties on agents/states (node disturbances), and multiplicative disturbances model attacks or uncertainties on couplings between agents/states (link disturbances). It investigates a notion of network robustness in terms of the underlying digraph of the network and the structure of exogenous uncertainties and attacks. Specifically, it defines a robustness measure using the $\mathcal H_2$-norm of the network and calculates it in terms of the reachability Gramian of the bilinear system. The main result is that under certain conditions, the measure is supermodular over the set of all possible attacked links. The supermodular property facilitates efficient approximate solution of the optimization problem. Examples illustrate how different structures can make the system more or less vulnerable to malicious attacks on links.
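The Gramian-based computation can be sketched as follows. This is a minimal, made-up 3-node example (topology, weights, and the single link-disturbance matrix N are all illustrative); the generalized Lyapunov equation for the reachability Gramian is solved by a simple fixed-point iteration, which converges when the bilinear term is weak relative to the stable linear part.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def bilinear_gramian(A, N_list, B, iters=200):
    """Fixed-point iteration for the generalized reachability Gramian:
    A P + P A' + sum_i N_i P N_i' + B B' = 0."""
    P = np.zeros_like(A)
    for _ in range(iters):
        Q = B @ B.T + sum(N @ P @ N.T for N in N_list)
        P = solve_continuous_lyapunov(A, -Q)
    return P

# Made-up 3-node network; N models a multiplicative attack on one link.
A = np.array([[-2.0, 1.0, 0.0],
              [1.0, -3.0, 1.0],
              [0.0, 1.0, -2.0]])
N = np.zeros((3, 3))
N[0, 1] = 0.5
B = np.eye(3)

P = bilinear_gramian(A, [N], B)
h2 = np.sqrt(np.trace(P))       # H2-type robustness measure, with C = I
print(h2)
```

Since the link-disturbance term adds a positive-semidefinite contribution to the Gramian, the bilinear measure is never smaller than the H2 norm of the underlying linear network, matching the intuition that link uncertainty degrades robustness.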
When measuring the importance of nodes in a network, the interconnections and dynamics are often assumed to be perfectly known. In this paper, we consider networks of agents with both uncertain couplings and dynamics. Network uncertainty is modeled by structured additive stochastic disturbances on each agent's update dynamics and coupling weights. We then study how these uncertainties change the network's centralities. Disturbances on the couplings between agents result in bilinear dynamics, and classical centrality indices from linear network theory need to be redefined. To do that, we first show that, similarly to its linear counterpart, the squared H2 norm of bilinear systems measures the trace of the steady-state error covariance matrix subject to stochastic disturbances. This makes the H2 norm a natural candidate for a performance metric of the system. We propose a centrality index for the agents based on the H2 norm, and show how it depends on the network topology and the noise structure. Finally, we simulate a few example graphs to illustrate how uncertainties on different couplings affect the agents' centrality rankings compared to a linearized model of the same system.
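The ranking idea can be sketched in the linearized setting (the 4-node chain and its weights are made-up for illustration): score node i by the squared H2 norm of the network driven by a unit disturbance at node i alone; the paper's index replaces this Gramian with its bilinear counterpart to account for coupling uncertainty.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical 4-node chain with self-loops of -2 and unit couplings.
A = np.array([[-2.0, 1.0, 0.0, 0.0],
              [1.0, -2.0, 1.0, 0.0],
              [0.0, 1.0, -2.0, 1.0],
              [0.0, 0.0, 1.0, -2.0]])

def centrality(i):
    """Squared H2 norm (C = I) for a unit disturbance entering at node i:
    trace of P solving A P + P A' + e_i e_i' = 0."""
    e = np.zeros((4, 1))
    e[i] = 1.0
    P = solve_continuous_lyapunov(A, -(e @ e.T))
    return float(np.trace(P))

scores = [centrality(i) for i in range(4)]
ranking = sorted(range(4), key=lambda i: -scores[i])
print(scores, ranking)   # middle nodes of the chain rank highest
```

In this linearized example the two interior nodes tie for the top rank by symmetry; the paper's point is that link uncertainty, through the bilinear terms, can reorder such rankings.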
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.
This document was translated from BibTeX by bibtex2html