Publications of Eduardo D. Sontag jointly with H.J. Sussmann |
Articles in journal or book chapters |
This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix and is proved for the case in which the activation function is the hyperbolic tangent. |
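For orientation, the system class in question is usually written with the activation applied componentwise; in the notation below (which is illustrative, not taken verbatim from the paper), x is the state, u the input, and B = (b_{ik}) is the input matrix that the controllability condition refers to:

\[
\dot x_i \;=\; \tanh\!\Big(\sum_{j=1}^{n} a_{ij}\, x_j \;+\; \sum_{k=1}^{m} b_{ik}\, u_k\Big),
\qquad i = 1,\dots,n .
\]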
This paper deals with the problem of global stabilization of linear discrete-time systems by means of bounded feedback laws. The main result proved is an analog of one proved by the authors for the continuous-time case, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less than or equal to one. The proof provides, in principle, an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections ("single hidden layer neural networks") of simple saturation functions. |
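Schematically, the two feedback architectures mentioned above take forms such as the following, where sigma denotes a simple saturation and the linear maps L_i and the parameters a_i, b_i, c_i are generic placeholders rather than the paper's actual construction:

\[
u \;=\; \sigma\big(L_1 x + \sigma(L_2 x + \cdots + \sigma(L_k x)\cdots)\big)
\quad\text{(cascade)},
\qquad
u \;=\; \sum_{i=1}^{k} a_i\, \sigma\big(b_i^{\top} x + c_i\big)
\quad\text{(single hidden layer)} .
\]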
Shorter and more expository version of "Nonsmooth control-Lyapunov functions" |
We present two constructions of controllers that globally stabilize linear systems subject to control saturation. We allow essentially arbitrary saturation functions. The only conditions imposed on the system are the obvious necessary ones, namely that no eigenvalues of the uncontrolled system have positive real part and that the standard stabilizability rank condition hold. One of the constructions is in terms of a "neural-network type" one-hidden-layer architecture, while the other is in terms of cascades of linear maps and saturations. |
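To illustrate the flavor of the cascade construction, here is a minimal simulation sketch for the double integrator under the classical nested-saturation feedback u = -sat(x2 + sat(x1 + x2)). This specific law is a standard textbook example of saturation-based global stabilization, not the general construction of the paper, and the simulation parameters are arbitrary:

```python
import numpy as np

def sat(s):
    """Unit saturation: clip the argument to [-1, 1]."""
    return float(np.clip(s, -1.0, 1.0))

def simulate(x1=5.0, x2=-3.0, dt=1e-3, T=60.0):
    """Euler simulation of the double integrator x1' = x2, x2' = u
    under the bounded nested-saturation feedback."""
    for _ in range(int(T / dt)):
        u = -sat(x2 + sat(x1 + x2))   # |u| <= 1 for all states
        x1, x2 = x1 + dt * x2, x2 + dt * u
    return x1, x2

print(simulate())  # both components should be driven close to zero
```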
Feedforward nets with sigmoidal activation functions are often designed by minimizing a cost criterion. It has been pointed out before that this technique may be outperformed by the classical perceptron learning rule, at least on some problems. In this paper, we show that no such pathologies can arise if the error criterion is of a threshold LMS type, i.e., is zero for values "beyond" the desired target values. More precisely, we show that if the data are linearly separable, and one considers nets with no hidden neurons, then an error function as above cannot have any local minima that are not global. In addition, the proof gives the following stronger result, under the stated hypotheses: the continuous gradient adjustment procedure is such that, from any initial weight configuration, a separating set of weights is obtained in finite time. This is a precise analogue of the Perceptron Learning Theorem. The results are then compared with the more classical pattern recognition problem of threshold LMS with linear activations, where no spurious local minima exist even for nonseparable data: here it is shown that even when the threshold criterion is used, such bad local minima may occur if the data are not separable and sigmoids are used. |
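As a concrete, purely illustrative instance of a threshold-LMS-type criterion for a net with no hidden neurons: the logistic output, the 0/1 labels, and the target levels 0.2/0.8 below are assumptions made for this sketch, not the paper's definitions.

```python
import numpy as np

def threshold_lms_cost(w, X, y, lo=0.2, hi=0.8):
    """Squared error that is zero once the output is 'beyond' the target:
    positive examples (y = 1) are penalized only below `hi`,
    negative examples (y = 0) only above `lo`."""
    out = 1.0 / (1.0 + np.exp(-X @ w))            # sigmoid outputs in (0, 1)
    pos = np.maximum(0.0, hi - out) * (y == 1)    # shortfall on positives
    neg = np.maximum(0.0, out - lo) * (y == 0)    # overshoot on negatives
    return float(np.sum(pos**2 + neg**2))
```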
We give an example of a neural net without hidden layers and with a sigmoid transfer function, together with a training set of binary vectors, for which the sum of the squared errors, regarded as a function of the weights, has a local minimum which is not a global minimum. The example consists of a set of 125 training instances, with four weights and a threshold to be learnt. We do not know if substantially smaller binary examples exist. |
We prove that the angular velocity equations can be smoothly stabilized with a single torque controller for bodies having an axis of symmetry. This complements a recent result of Aeyels and Szafranski. |
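For reference, the angular velocity (Euler) equations with a single torque entering along a fixed body axis b are typically written as below; the symmetry assumption amounts to two equal principal moments of inertia (e.g., I_1 = I_2). The notation is the standard one and is not copied from the paper:

\[
I_1\dot\omega_1 = (I_2 - I_3)\,\omega_2\omega_3 + b_1 u,\qquad
I_2\dot\omega_2 = (I_3 - I_1)\,\omega_3\omega_1 + b_2 u,\qquad
I_3\dot\omega_3 = (I_1 - I_2)\,\omega_1\omega_2 + b_3 u .
\]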
Problems that appear in trying to extend linear control results to systems over rings R have attracted considerable attention lately. This interest has been due mainly to applications-oriented motivations (in particular, dealing with delay-differential equations), and partly to a purely algebraic interest. Given a square n-matrix F and an n-row matrix G, pole-shifting problems consist in obtaining more or less arbitrary characteristic polynomials for F+GK, for suitable ("feedback") matrices K. A review of known facts is given, various partial results are proved, and the case n=2 is studied in some detail. |
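In symbols, the pole-shifting problem described here asks: given F in R^{n x n}, G in R^{n x m}, and a (more or less arbitrary) monic polynomial chi of degree n over R, find a feedback matrix K in R^{m x n} with

\[
\det\big(zI - (F + G K)\big) \;=\; \chi(z) .
\]

(The dimensions are the natural reading of the abstract.)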
Conference articles |
It is shown that the existence of a continuous control-Lyapunov function (CLF) is necessary and sufficient for null asymptotic controllability of nonlinear finite-dimensional control systems. The CLF condition is expressed in terms of a concept of generalized derivative (upper contingent derivative). This result generalizes to the non-smooth case the theorem of Artstein relating closed-loop feedback stabilization to smooth CLF's. It relies on viability theory as well as optimal control techniques. A "non-strict" version of the results, analogous to the LaSalle Invariance Principle, is also provided. |
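One common way to write the generalized decay condition, for a system \dot x = f(x,u) with a continuous, positive definite, proper V (the exact technical formulation in the paper, e.g. the possible use of relaxed controls, may differ from this sketch), is: for each x != 0 there should exist a control value u with

\[
DV\big(x; f(x,u)\big) \;<\; 0,
\qquad\text{where}\qquad
DV(x;v) \;:=\; \liminf_{t \searrow 0,\; v' \to v}\; \frac{V(x + t v') - V(x)}{t}
\]

is a contingent (Dini-type) directional derivative.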
This paper shows the existence of (nonlinear) smooth dynamic feedback stabilizers for linear time invariant systems under input constraints, assuming only that open-loop asymptotic controllability and detectability hold. |
This paper studies time-optimal control questions for a certain class of nonlinear systems. This class includes a large number of mechanical systems, in particular, rigid robotic manipulators with torque constraints. Although nonlinear, these systems have many properties that are false for generic systems of the same dimensions. |
We consider the problem of estimating a signal, which is known -- or assumed -- to be constant on each of the members of a partition of a square lattice into m unknown regions, from the observation of the signal plus Gaussian noise. This is a nonlinear estimation problem, for which it is not appropriate to use the conditional expectation as the estimate. We show that, at least in principle, the "maximum likelihood estimator" (MLE) proposed by Geman and Geman lends itself to numerical computation using the annealing algorithm. We argue that the MLE by itself can be, under certain conditions (low signal-to-noise ratio), a very unsatisfactory estimator, in that it does worse than just deciding that the signal was zero. However, if combined with a rule which we propose for deciding when to use it and when to ignore it, the MLE can provide a reasonable suboptimal estimator. We then discuss preliminary numerical data obtained using the annealing method. These results indicate that: (a) the annealing algorithm performs remarkably well, and (b) a criterion can be formulated in terms of quantities computed from the observed image (without using a priori knowledge of the signal-to-noise ratio) for deciding when to keep the MLE. |
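The annealing computation referred to can be sketched, in very simplified form, as a Metropolis-style minimization of a data-misfit-plus-boundary-penalty energy over label fields. Everything below (the energy, the assumption of known candidate levels, the cooling schedule, and all parameters) is an illustrative assumption for the sketch, not the setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_change(y, labels, levels, beta, i, j, new_lab):
    """Change in (data misfit + boundary penalty) if pixel (i, j) switches label."""
    old_lab = labels[i, j]
    d_data = (y[i, j] - levels[new_lab])**2 - (y[i, j] - levels[old_lab])**2
    d_pair = 0.0
    n, m = labels.shape
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        a, b = i + di, j + dj
        if 0 <= a < n and 0 <= b < m:
            d_pair += beta * (int(new_lab != labels[a, b]) - int(old_lab != labels[a, b]))
    return d_data + d_pair

def anneal(y, levels, beta=1.0, sweeps=200, T0=2.0):
    """Metropolis annealing for a piecewise-constant label field."""
    labels = rng.integers(len(levels), size=y.shape)
    n, m = y.shape
    for s in range(sweeps):
        T = T0 / (1.0 + s)                       # simple cooling schedule
        for _ in range(n * m):
            i, j = rng.integers(n), rng.integers(m)
            new_lab = rng.integers(len(levels))
            dE = energy_change(y, labels, levels, beta, i, j, new_lab)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                labels[i, j] = new_lab
    return labels

# toy example: two regions at levels 0 and 1, observed in Gaussian noise
truth = np.zeros((16, 16)); truth[:, 8:] = 1.0
noisy = truth + 0.4 * rng.normal(size=truth.shape)
estimate = anneal(noisy, levels=np.array([0.0, 1.0]))
```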
This note addresses the following problem: Find conditions under which a continuous-time (nonlinear) system gives rise, under constant rate sampling, to a discrete-time system which satisfies the accessibility property. |
We show that, in general, it is impossible to stabilize a controllable system by means of a continuous feedback, even if memory is allowed. No optimality considerations are involved. All state spaces are Euclidean spaces, so no obstructions arising from the state space topology are involved either. For one-dimensional state and input, we prove that continuous stabilization with memory is always possible. (This is an old conference paper, never published in journal form but widely cited nonetheless. Warning: the file is very large, since it was scanned.) |
Internal reports |
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.
This document was translated from BibTeX by bibtex2html