
Publications about 'computational complexity'
Articles in journals or book chapters
  1. R. Albert, B. DasGupta, R. Dondi, and E.D. Sontag. Inferring (biological) signal transduction networks via transitive reductions of directed graphs. Algorithmica, 51:129-159, 2008. [PDF] [doi:10.1007/s00453-007-9055-0] Keyword(s): systems biology, reaction networks, algorithms, signal transduction networks, graph algorithms.
    Abstract:
    The transitive reduction problem is that of inferring a sparsest possible biological signal transduction network consistent with a set of experimental observations, with the goal of minimizing false positive inferences even at the risk of false negatives. This paper provides computational complexity results as well as approximation algorithms with guaranteed performance.
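
    As a rough illustration of the transitive reduction idea (a minimal sketch only, not the algorithm of the paper, which handles the biological constraints and comes with approximation guarantees; the function name below is made up for this example):

        # Illustration in Python: drop an edge u->v whenever v is still reachable
        # from u through a longer path, so that only "essential" edges remain.
        def transitive_reduction(edges):
            from collections import defaultdict
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)

            def reachable(src, dst, skip_edge):
                # depth-first search that is forbidden from using skip_edge
                stack, seen = [src], {src}
                while stack:
                    x = stack.pop()
                    for y in adj[x]:
                        if (x, y) == skip_edge or y in seen:
                            continue
                        if y == dst:
                            return True
                        seen.add(y)
                        stack.append(y)
                return False

            kept = []
            for u, v in edges:
                if reachable(u, v, skip_edge=(u, v)):
                    adj[u].discard(v)   # redundant: an indirect path already explains u -> v
                else:
                    kept.append((u, v))
            return kept

        # toy network: A->B, B->C, plus the redundant shortcut A->C
        print(transitive_reduction([("A", "B"), ("B", "C"), ("A", "C")]))  # [('A', 'B'), ('B', 'C')]

    On directed acyclic graphs this greedy removal yields the unique transitive reduction; on graphs with cycles this simple heuristic need not produce a sparsest equivalent graph.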


  2. R. Albert, B. DasGupta, R. Dondi, S. Kachalo, E.D. Sontag, A. Zelikovsky, and K. Westbrooks. A novel method for signal transduction network inference from indirect experimental evidence. Journal of Computational Biology, 14:927-949, 2007. [PDF] Keyword(s): systems biology, reaction networks, algorithms, signal transduction networks, graph algorithms.
    Abstract:
    This paper introduces a new method for combined synthesis and inference of biological signal transduction networks. The main idea is to represent observed causal relationships as network paths and to use techniques from combinatorial optimization to find the sparsest graph consistent with all experimental observations. The paper formalizes the approach, studies its computational complexity, and proves new results for exact and approximate solutions of the computationally hard transitive reduction substep. It validates the biological applicability of the method by applying it to a previously published signal transduction network by Li et al., and shows that the algorithm for the transitive reduction substep performs well on graphs whose structure is similar to that observed in transcriptional regulatory and signal transduction networks.


  3. P. Berman, B. DasGupta, and E.D. Sontag. Randomized approximation algorithms for set multicover problems with applications to reverse engineering of protein and gene networks. Discrete Applied Mathematics (Special Series on Computational Molecular Biology), 155:733-749, 2007. [PDF] Keyword(s): systems biology, reaction networks, gene and protein networks, systems identification, reverse engineering.
    Abstract:
    This paper investigates computational complexity aspects of a combinatorial problem that arises in the reverse engineering of protein and gene networks. It shows relations to an appropriate set multicover problem with a large "coverage" factor, and provides a non-trivial analysis of a simple randomized polynomial-time approximation algorithm for that problem.
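
    The paper's contribution is the analysis of a randomized polynomial-time approximation algorithm for this set multicover variant. As a purely illustrative sketch of the underlying set-multicover formulation (not the paper's algorithm; names and the assumption that a set may be chosen repeatedly are made up for this example), a plain greedy heuristic might look as follows:

        # Hypothetical greedy sketch of set multicover: each element e must be covered
        # at least demand[e] times; repeatedly pick the set that covers the most
        # still-deficient elements (sets may be reused in this toy variant).
        def greedy_set_multicover(sets, demand):
            remaining = dict(demand)
            chosen = []
            while any(d > 0 for d in remaining.values()):
                gain = lambda i: sum(1 for e in sets[i] if remaining.get(e, 0) > 0)
                best = max(range(len(sets)), key=gain)
                if gain(best) == 0:
                    raise ValueError("demands cannot be met by the given sets")
                chosen.append(best)
                for e in sets[best]:
                    if remaining.get(e, 0) > 0:
                        remaining[e] -= 1
            return chosen

        # elements a and b must each be covered twice, c once
        print(greedy_set_multicover([{"a", "b"}, {"b", "c"}, {"a", "c"}],
                                    {"a": 2, "b": 2, "c": 1}))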


  4. B. DasGupta, J.P. Hespanha, J. Riehl, and E.D. Sontag. Honey-pot constrained searching with local sensory information. Nonlinear Analysis, 65:1773-1793, 2006. [PDF] Keyword(s): search problems, algorithms, computational complexity.
    Abstract:
    This paper investigates the problem of searching for a hidden target in a bounded region of the plane by an autonomous robot that can only use limited local sensory information. It proposes an aggregation-based approach in which the continuous search space is partitioned into a finite collection of regions, a discrete search problem is defined on these regions, and a solution to the original problem is obtained through a refinement procedure that lifts the discrete path into a continuous one. The resulting solution is in general not optimal, but bounds can be constructed to gauge the cost penalty incurred. The discrete version is formalized and an optimization problem is stated as a `reward-collecting' bounded-length path problem. NP-completeness and efficient approximation algorithms for various cases of this problem are discussed.
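
    As an illustrative toy version of a `reward-collecting' bounded-length path problem on a small discrete graph (a brute-force sketch under simplifying assumptions -- simple paths, additive edge lengths, vertex rewards -- not the paper's exact formulation or its approximation algorithms):

        # Toy brute force: find a simple path from `start` of total length at most
        # `max_len` that collects the largest total vertex reward.
        def best_reward_path(adj, reward, start, max_len):
            best = {"reward": reward.get(start, 0), "path": [start]}

            def dfs(node, length, visited, path, collected):
                if collected > best["reward"]:
                    best["reward"], best["path"] = collected, list(path)
                for nxt, w in adj.get(node, []):
                    if nxt not in visited and length + w <= max_len:
                        visited.add(nxt)
                        path.append(nxt)
                        dfs(nxt, length + w, visited, path, collected + reward.get(nxt, 0))
                        path.pop()
                        visited.remove(nxt)

            dfs(start, 0, {start}, [start], reward.get(start, 0))
            return best["reward"], best["path"]

        adj = {"s": [("a", 1), ("b", 2)], "a": [("b", 1)], "b": [("c", 1)]}
        reward = {"a": 5, "b": 3, "c": 4}
        print(best_reward_path(adj, reward, "s", max_len=3))  # (12, ['s', 'a', 'b', 'c'])

    Exhaustive search of this kind only scales to tiny instances, which is why approximation algorithms such as those discussed in the paper matter.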


  5. B. DasGupta and E.D. Sontag. A polynomial-time algorithm for checking equivalence under certain semiring congruences motivated by the state-space isomorphism problem for hybrid systems. Theor. Comput. Sci., 262(1-2):161-189, 2001. [PDF] [doi:10.1016/S0304-3975(00)00188-2] Keyword(s): hybrid systems, computational complexity.
    Abstract:
    The area of hybrid systems concerns issues of modeling, computation, and control for systems which combine discrete and continuous components. The subclass of piecewise linear (PL) systems provides one systematic approach to discrete-time hybrid systems, naturally blending switching mechanisms with classical linear components. PL systems model arbitrary interconnections of finite automata and linear systems. Tools from automata theory, logic, and related areas of computer science and finite mathematics are used in the study of PL systems, in conjunction with linear algebra techniques, all in the context of a "PL algebra" formalism. PL systems are of interest as controllers as well as identification models. Basic questions for any class of systems are those of equivalence, and, in particular, whether state spaces are equivalent under a change of variables. This paper studies this state-space equivalence problem for PL systems. The problem was known to be decidable, but its computational complexity was potentially exponential; here it is shown to be solvable in polynomial time.


  6. B. DasGupta, H.T. Siegelmann, and E.D. Sontag. On the complexity of training neural networks with continuous activation functions. IEEE Trans. Neural Networks, 6:1490-1504, 1995. [PDF] Keyword(s): machine learning, neural networks, analog computing, theory of computing, computational complexity.
    Abstract:
    Blum and Rivest showed that any possible neural net learning algorithm based on fixed architectures faces severe computational barriers. This paper extends their NP-completeness result, which applied only to nets based on hard threshold activations, to nets that employ a particular continuous activation. In view of neural network practice, this result is more relevant to understanding the limitations of backpropagation and related techniques.


  7. H. T. Siegelmann and E.D. Sontag. On the computational power of neural nets. J. Comput. System Sci., 50(1):132-150, 1995. [PDF] [doi:10.1006/jcss.1995.1013] Keyword(s): machine learning, neural networks, recurrent neural networks, analog computing, theory of computing, computational complexity, super-Turing computation.
    Abstract:
    This paper deals with finite size networks which consist of interconnections of synchronously evolving processors. Each processor updates its state by applying a "sigmoidal" function to a rational-coefficient linear combination of the previous states of all units. We prove that one may simulate all Turing Machines by such nets. In particular, one can simulate any multi-stack Turing Machine in real time, and there is a net made up of 886 processors which computes a universal partial-recursive function. Products (high order nets) are not required, contrary to what had been stated in the literature. Non-deterministic Turing Machines can be simulated by non-deterministic rational nets, also in real time. The simulation result has many consequences regarding the decidability, or more generally the complexity, of questions about recursive nets.
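
    The processor model described here can be illustrated with a single synchronous update using a saturated-linear "sigmoidal" activation and exact rational arithmetic (a minimal sketch of the state-update rule only, with made-up weights; the Turing-machine simulation itself is far more involved):

        # One synchronous update of a rational-weight recurrent net with the
        # saturated-linear activation sigma(x) = min(1, max(0, x)).
        from fractions import Fraction as F

        def sigma(x):
            return min(F(1), max(F(0), x))

        def step(W, b, x):
            # x_i^+ = sigma( sum_j W[i][j] * x[j] + b[i] )
            return [sigma(sum(wij * xj for wij, xj in zip(row, x)) + bi)
                    for row, bi in zip(W, b)]

        W = [[F(1, 2), F(1)], [F(-1), F(1, 4)]]
        b = [F(0), F(3, 4)]
        x = [F(1), F(1, 3)]
        print(step(W, b, x))   # next state, computed exactly over the rationals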


  8. B. DasGupta, H.T. Siegelmann, and E.D. Sontag. On the Intractability of Loading Neural Networks. In V. P. Roychowdhury, K. Y. Siu, and A. Orlitsky, editors, Theoretical Advances in Neural Computation and Learning, pages 357-389. Kluwer Academic Publishers, 1994. [PDF] Keyword(s): analog computing, neural networks, computational complexity, machine learning.


  9. H. T. Siegelmann and E.D. Sontag. Analog computation via neural networks. Theoret. Comput. Sci., 131(2):331-360, 1994. [PDF] [doi:10.1016/0304-3975(94)90178-3] Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.
    Abstract:
    We consider recurrent networks with real-valued weights. If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though they remain more powerful than Turing Machines. Moreover, there is a precise correspondence between nets and standard non-uniform circuits with equivalent resources, and as a consequence one has lower bound constraints on what they can compute. We note that these networks are unlikely to solve NP-hard problems in polynomial time, as the equality "P=NP" in our model implies the almost complete collapse of the standard polynomial hierarchy. We show that a large class of different networks and dynamical system models have no more computational power than this neural (first-order) model with real weights. The results suggest the following Church-like Thesis of Time-bounded Analog Computing: "Any reasonable analog computer will have no more power (up to polynomial time) than first-order recurrent networks."


  10. H. T. Siegelmann and E.D. Sontag. Turing computability with neural nets. Appl. Math. Lett., 4(6):77-80, 1991. [PDF] Keyword(s): machine learning, neural networks, computational complexity, recurrent neural networks.
    Abstract:
    This paper shows the existence of a finite neural network, made up of sigmoidal neurons, which simulates a universal Turing machine. It is composed of fewer than 100,000 synchronously evolving processors, interconnected linearly. High-order connections are not required. (Note: this paper was placed here by special request. The results in this paper have by now been improved considerably: see the JCSS paper, which among other aspects provides a polynomial-time simulation. This paper, based on a unary encoding, results in an exponential slowdown.)


  11. E.D. Sontag. Controllability is harder to decide than accessibility. SIAM J. Control Optim., 26(5):1106-1118, 1988. [PDF] [doi:10.1137/0326061] Keyword(s): computational complexity, controllability.
    Abstract:
    The present article compares the difficulties of deciding controllability and accessibility. These are standard properties of control systems, but complete algebraic characterizations of controllability have proved elusive. We show in particular that for subsystems of bilinear systems, accessibility can be decided in polynomial time, but controllability is NP-hard.


Conference articles
  1. E.D. Sontag. From linear to nonlinear: some complexity comparisons. In Proc. IEEE Conf. Decision and Control, New Orleans, Dec. 1995, pages 2916-2920. IEEE Publications, 1995. [PDF] Keyword(s): theory of computing and complexity, computational complexity, controllability, observability.
    Abstract:
    This paper deals with the computational complexity, and in some cases undecidability, of several problems in nonlinear control. The objective is to compare the theoretical difficulty of solving such problems to the corresponding problems for linear systems. In particular, the problem of null-controllability for systems with saturations (of a "neural network" type) is mentioned, as well as problems regarding piecewise linear (hybrid) systems. A comparison of accessibility, which can be checked fairly simply by Lie-algebraic methods, and controllability, which is at least NP-hard for bilinear systems, is carried out. Finally, some remarks are given on analog computation in this context.


  2. B. DasGupta, H. T. Siegelmann, and E.D. Sontag. On a learnability question associated to neural networks with continuous activations (extended abstract). In COLT '94: Proceedings of the seventh annual conference on Computational learning theory, New York, NY, USA, pages 47-56, 1994. ACM Press. [doi:10.1145/180139.181009] Keyword(s): machine learning, analog computing, neural networks, computational complexity.


  3. J. L. Balcázar, R. Gavaldà, H. T. Siegelmann, and E.D. Sontag. Some structural complexity aspects of neural computation. In Proceedings of the Eighth Annual Structure in Complexity Theory Conference (San Diego, CA, 1993), Los Alamitos, CA, pages 253-265, 1993. IEEE Comput. Soc. Press. [PDF] Keyword(s): machine learning, analog computing, neural networks, computational complexity, super-Turing computation, theory of computing and complexity.
    Abstract:
    Recent work by H.T. Siegelmann and E.D. Sontag (1992) has demonstrated that polynomial time on linear saturated recurrent neural networks equals polynomial time on standard computational models: Turing machines if the weights of the net are rationals, and nonuniform circuits if the weights are real. Here, further connections between the languages recognized by such neural nets and other complexity classes are developed. Connections to space-bounded classes, simulation of parallel computational models such as Vector Machines, and a discussion of the characterizations of various nonuniform classes in terms of Kolmogorov complexity are presented.


  4. H.T. Siegelmann and E.D. Sontag. Analog computation via neural networks. In Proc. 2nd Israel Symposium on Theory of Computing and Systems (ISTCS93). IEEE Computer Society Press, 1993. Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.


  5. H.T. Siegelmann and E.D. Sontag. On the computational power of neural nets. In COLT '92: Proceedings of the fifth annual workshop on Computational learning theory, New York, NY, USA, pages 440-449, 1992. ACM Press. [doi:10.1145/130385.130432] Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.


  6. H.T. Siegelmann and E.D. Sontag. Some results on computing with neural nets. In Proc. IEEE Conf. Decision and Control, Tucson, Dec. 1992, pages 1476-1481. IEEE Publications, 1992. Keyword(s): analog computing, neural networks, computational complexity, super-Turing computation, recurrent neural networks.


  7. H.T. Siegelmann, E.D. Sontag, and C.L. Giles. The Complexity of Language Recognition by Neural Networks. In Proceedings of the IFIP 12th World Computer Congress on Algorithms, Software, Architecture - Information Processing '92, Volume 1, pages 329-335, 1992. North-Holland. Keyword(s): machine learning, neural networks, computational complexity, recurrent neural networks, theory of computing and complexity.


  8. E.D. Sontag. Some complexity questions regarding controllability. In Proc. IEEE Conf. Decision and Control, Austin, Dec. 1988, pages 1326-1329. [PDF] Keyword(s): theory of computing and complexity, computational complexity, controllability.
    Abstract:
    It has been known for a long time that certain controllability properties are more difficult to verify than others. This article makes this fact precise, comparing controllability with accessibility for a wide class of nonlinear continuous-time systems. The original contribution lies in formalizing this comparison in the context of computational complexity. (This paper was placed here by special request.)







Disclaimer:

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders.




Last modified: Fri Nov 15 15:28:35 2024
Author: sontag.


This document was translated from BibTeX by bibtex2html