
Publications by Eduardo D. Sontag in 1997
Articles in journal or book chapters
  1. P. Koiran and E.D. Sontag. Vapnik-Chervonenkis dimension of recurrent neural networks. In Computational learning theory (Jerusalem, 1997), volume 1208 of Lecture Notes in Comput. Sci., pages 223-237. Springer-Verlag, London, UK, 1997. Keyword(s): machine learning, neural networks, VC dimension, recurrent neural networks.


  2. Y.S. Ledyaev and E.D. Sontag. A notion of discontinuous feedback. In Control using logic-based switching (Block Island, RI, 1995), volume 222 of Lecture Notes in Control and Inform. Sci., pages 97-103. Springer, London, 1997.


  3. E.D. Sontag. Recurrent neural networks: Some systems-theoretic aspects. In M. Karny, K. Warwick, and V. Kurkova, editors, Dealing with Complexity: a Neural Network Approach, pages 1-12. Springer-Verlag, London, 1997. [PDF] Keyword(s): machine learning, neural networks, recurrent neural networks, learning, VC dimension.
    Abstract:
    This paper provides an exposition of some recent results regarding system-theoretic aspects of continuous-time recurrent (dynamic) neural networks with sigmoidal activation functions. The class of systems is introduced and discussed, and a result is cited regarding their universal approximation properties. Known characterizations of controllability, observability, and parameter identifiability are reviewed, as well as a result on minimality. Facts regarding the computational power of recurrent nets are also mentioned.
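    For orientation, the class of systems in question can be sketched as follows (our notation; details such as dimensions and output maps vary across the results surveyed):

        \dot{x}(t) = \vec{\sigma}\big(A x(t) + B u(t)\big), \qquad y(t) = C x(t),

    where x(t) is the n-dimensional state, u(t) the input, y(t) the output, A, B, C are constant matrices, and \vec{\sigma} applies a scalar sigmoid such as \tanh to each coordinate.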


  4. F. H. Clarke, Y.S. Ledyaev, E.D. Sontag, and A.I. Subbotin. Asymptotic controllability implies feedback stabilization. IEEE Trans. Automat. Control, 42(10):1394-1407, 1997. [PDF]
    Abstract:
    It is shown that every asymptotically controllable system can be stabilized by means of some (discontinuous) feedback law. One of the contributions of the paper is in defining precisely the meaning of stabilization when the feedback rule is not continuous. The main ingredients in our construction are: (a) the notion of control-Lyapunov function, (b) methods of nonsmooth analysis, and (c) techniques from positional differential games.
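    As a hedged aside, ingredient (a) can be sketched in the smooth case (the paper itself works with nonsmooth V, using proximal subgradients in place of gradients):

        \inf_{u} \, \nabla V(x) \cdot f(x,u) < 0 \quad \text{for all } x \neq 0,

    that is, from every nonzero state some admissible control value strictly decreases the control-Lyapunov function V.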


  5. M. J. Donahue, L. Gurvits, C. Darken, and E.D. Sontag. Rates of convex approximation in non-Hilbert spaces. Constr. Approx., 13(2):187-220, 1997. [PDF] Keyword(s): machine learning, neural networks, optimization, approximation theory.
    Abstract:
    This paper deals with sparse approximations by means of convex combinations of elements from a predetermined "basis" subset S of a function space. Specifically, the focus is on the rate at which the lowest achievable error can be reduced as larger subsets of S are allowed when constructing an approximant. The new results extend those given for Hilbert spaces by Jones and Barron, including in particular a computationally attractive incremental approximation scheme. Bounds are derived for broad classes of Banach spaces. The techniques used borrow from results regarding moduli of smoothness in functional analysis as well as from the theory of stochastic processes on function spaces.
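    For orientation, the Hilbert-space baseline being extended reads, in a standard formulation (ours, not quoted from the paper): if f lies in the closed convex hull of a bounded subset S of a Hilbert space, then for each n there is a convex combination f_n of n elements of S with

        \| f - f_n \| \le \frac{c}{\sqrt{n}},

    where c depends only on the bound on S. The paper obtains analogous rates in Banach spaces, with exponents governed by the smoothness properties of the space.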


  6. P. Koiran and E.D. Sontag. Neural networks with quadratic VC dimension. J. Comput. System Sci., 54(1, part 2):190-198, 1997. Note: (1st Annual Dagstuhl Seminar on Neural Computing, 1994). [PDF] [doi:http://dx.doi.org/10.1006/jcss.1997.1479] Keyword(s): machine learning, neural networks, VC dimension.
    Abstract:
    This paper shows that neural networks which use continuous activation functions have VC dimension at least as large as the square of the number of weights w. This result settles the open question of whether the well-known O(w log w) bound, known for hard-threshold nets, also holds for more general sigmoidal nets. Implications for the number of samples needed for valid generalization are discussed.
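    In symbols (our paraphrase of the two bounds): hard-threshold nets with w weights satisfy VCdim = O(w \log w), while the paper exhibits continuous-activation architectures with

        \mathrm{VCdim} \ge c\, w^2

    for a constant c > 0, so the O(w \log w) upper bound cannot extend to general sigmoidal nets.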


  7. R. Koplon and E.D. Sontag. Using Fourier-neural recurrent networks to fit sequential input/output data. Neurocomputing, 15:225-248, 1997. [PDF] Keyword(s): machine learning, neural networks, recurrent neural networks.
    Abstract:
    This paper suggests the use of Fourier-type activation functions in fully recurrent neural networks. The main theoretical advantage is that, in principle, the problem of recovering internal coefficients from input/output data is solvable in closed form.
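    As a purely illustrative sketch of the model class (a discrete-time recurrent net with a sinusoidal activation; the paper's precise model and its closed-form recovery procedure are not reproduced here, and all names below are ours):

        # Hypothetical illustration: forward simulation of a recurrent
        # network with a Fourier-type (sinusoidal) activation.
        import numpy as np

        def simulate(A, B, C, u_seq, x0):
            """x[t+1] = sin(A x[t] + B u[t]), y[t] = C x[t]."""
            x, ys = x0, []
            for u in u_seq:
                ys.append(C @ x)               # read out the output
                x = np.sin(A @ x + B @ u)      # Fourier-type state update
            return np.array(ys)

        # Tiny usage example: 2 states, 1 input, 1 output.
        rng = np.random.default_rng(0)
        A = 0.5 * rng.standard_normal((2, 2))
        B = rng.standard_normal((2, 1))
        C = rng.standard_normal((1, 2))
        y = simulate(A, B, C, [np.array([0.1])] * 20, np.zeros(2))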


  8. E.D. Sontag. Shattering all sets of k points in 'general position' requires (k-1)/2 parameters. Neural Comput., 9(2):337-348, 1997. [PDF] Keyword(s): machine learning, neural networks, VC dimension, real-analytic functions.
    Abstract:
    For classes of concepts defined by certain classes of analytic functions depending on k parameters, there are nonempty open sets of samples of length 2k+2 which cannot be shattered. A slightly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.
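    As a reminder of the terminology (standard definition, not specific to this paper): a class F of real-valued functions shatters points x_1, ..., x_m if every sign pattern is realizable,

        \forall\, (\epsilon_1,\dots,\epsilon_m) \in \{-1,+1\}^m \ \ \exists f \in F : \ \mathrm{sign}\, f(x_i) = \epsilon_i, \quad i = 1,\dots,m.

    The result says that for the analytic classes considered, with k parameters this already fails on nonempty open sets of configurations of 2k+2 points.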


  9. E.D. Sontag and H.J. Sussmann. Complete controllability of continuous-time recurrent neural networks. Systems Control Lett., 30(4):177-183, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)00002-9] Keyword(s): machine learning, neural networks, recurrent neural networks.
    Abstract:
    This paper presents a characterization of controllability for the class of control systems commonly called (continuous-time) recurrent neural networks. The characterization involves a simple condition on the input matrix, and is proved when the activation function is the hyperbolic tangent.


  10. E.D. Sontag and Y. Wang. Output-to-state stability and detectability of nonlinear systems. Systems Control Lett., 29(5):279-290, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)90013-X] Keyword(s): input to state stability, integral input to state stability, iISS, ISS, detectability, output to state stability.
    Abstract:
    The notion of input-to-state stability (ISS) has proved to be useful in nonlinear systems analysis. This paper discusses a dual notion, output-to-state stability (OSS). A characterization is provided in terms of a dissipation inequality involving storage (Lyapunov) functions. Combining ISS and OSS yields the notion of input/output-to-state stability (IOSS), which is also studied and related to the notion of detectability, the existence of observers, and output injection.
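    A hedged sketch of the dissipation inequality, in the smooth, input-free case (our notation): for \dot{x} = f(x) with output y = h(x), OSS is characterized by the existence of a storage function V with

        \nabla V(x) \cdot f(x) \le -\alpha(|x|) + \gamma(|h(x)|)

    for suitable class-K_\infty functions \alpha, \gamma; the IOSS variant adds an input term of the same type to the right-hand side.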


  11. Y. Yang, E.D. Sontag, and H.J. Sussmann. Global stabilization of linear discrete-time systems with bounded feedback. Systems Control Lett., 30(5):273-281, 1997. [PDF] [doi:http://dx.doi.org/10.1016/S0167-6911(97)00021-2] Keyword(s): discrete-time, saturation, bounded inputs.
    Abstract:
    This paper deals with the problem of global stabilization of linear discrete-time systems by means of bounded feedback laws. The main result is an analog of one proved by the authors for the continuous-time case, and shows that such stabilization is possible if and only if the system is stabilizable with arbitrary controls and the transition matrix has spectral radius less than or equal to one. The proof provides in principle an algorithm for the construction of such feedback laws, which can be implemented either as cascades or as parallel connections ("single hidden layer neural networks") of simple saturation functions.
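    Schematically (our notation; the paper supplies the precise construction and coefficients), the two implementations mentioned take the forms

        u = \sigma\big(L_1 x + \sigma(L_2 x + \cdots + \sigma(L_k x)\cdots)\big)    (cascade)
        u = F\, \vec{\sigma}(L x)                                                   (single hidden layer)

    where \sigma is a scalar saturation, applied componentwise where needed, and the linear maps are chosen by the algorithm.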


Conference articles
  1. F. Albertini and E.D. Sontag. Control-Lyapunov functions for time-varying set stabilization. In Proc. European Control Conf., Brussels, July 1997. Note: (Paper WE-E A5, CD-ROM file ECC515.pdf, 6 pages). Keyword(s): control-Lyapunov functions.


  2. Y.S. Ledyaev and E.D. Sontag. A remark on robust stabilization of general asymptotically controllable systems. In Proc. Conf. on Information Sciences and Systems (CISS 97), Johns Hopkins, Baltimore, MD, March 1997, pages 246-251. [PDF]
    Abstract:
    We showed in another recent paper that any asymptotically controllable system can be stabilized by means of a certain type of discontinuous feedback. The feedback laws constructed in that work are robust with respect to actuator errors as well as to perturbations of the system dynamics. A drawback, however, is that they may be highly sensitive to errors in the measurement of the state vector. This paper addresses this shortcoming, and shows how to design a dynamic hybrid stabilizing controller which, while preserving robustness to external perturbations and actuator error, is also robust with respect to measurement error. This new design relies upon a controller which incorporates an internal model of the system driven by the previously constructed feedback.


  3. E.D. Sontag. Some learning and systems-theoretic questions regarding recurrent neural networks. In Proc. Conf. on Information Sciences and Systems (CISS 97), Johns Hopkins, Baltimore, MD, March 1997, pages 630-635. Keyword(s): machine learning, neural networks, VC dimension, recurrent neural networks.


  4. E.D. Sontag and Y. Wang. A notion of input to output stability. In Proc. European Control Conf., Brussels, July 1997. Note: (Paper WE-E A2, CD-ROM file ECC958.pdf, 6 pages). [PDF] Keyword(s): input to state stability, ISS, input to output stability.
    Abstract:
    This paper deals with a notion of "input to output stability (IOS)", which formalizes the idea that outputs depend in an "asymptotically stable" manner on inputs, while internal signals remain bounded. When the output equals the complete state, one recovers the property of input to state stability (ISS). When there are no inputs, one has a generalization of the classical concept of partial stability. The main results provide Lyapunov-function characterizations of IOS.
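    Concretely, the IOS estimate can be sketched as follows (our notation, mirroring the standard ISS estimate): along all solutions,

        |y(t)| \le \beta(|x(0)|, t) + \gamma(\|u\|_\infty), \qquad t \ge 0,

    for some \beta of class KL and \gamma of class K; choosing y = x as the output recovers the ISS estimate itself.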


