Expressive Power of Recurrent Neural Networks Working on Infinite Input Streams

date: 2016-05-27 14:30
slug: 2016-05-27-seminar
Authors: Jérémie Cabessa
lang: en
institution: Université Paris II
tags: Plateau seminar
location: LIX

Abstract: The computational capabilities of neural network models are known to be tightly related to the activation functions of their neurons, the nature of their synaptic connections, the possible presence of noise in the model, the ability of the architecture to evolve over time, etc. Here, we provide a characterization of the expressive power of several models of recurrent neural networks working on infinite input streams. This expressive power turns out to be related to the intricacy of the networks' attractor dynamics. Overall, we show that deterministic analog and evolving recurrent neural networks recognize the class of Boolean combinations of \(\mathbf{\Pi^0_2}\) \(\omega\)-languages, while nondeterministic analog and evolving recurrent neural networks recognize the class of all \(\mathbf{\Sigma^1_1}\) \(\omega\)-languages.
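For reference, the classes named in the abstract admit the following classical characterizations from descriptive set theory over the Cantor space of infinite words; this reminder is standard background, not part of the abstract itself, and the finite alphabets \(\Sigma\) and \(\Gamma\) are assumptions of this note:

\[
\begin{aligned}
&\lim(W) = \{\alpha \in \Sigma^{\omega} : \alpha \text{ has infinitely many prefixes in } W\}, \quad \text{for } W \subseteq \Sigma^{*};\\
&L \in \mathbf{\Pi^0_2} \iff L = \lim(W) \text{ for some } W \subseteq \Sigma^{*} \quad (\text{the } G_\delta \text{ sets, i.e., countable intersections of open sets});\\
&\mathrm{BC}(\mathbf{\Pi^0_2}) = \text{finite Boolean combinations (unions, intersections, complements) of } \mathbf{\Pi^0_2} \text{ sets};\\
&L \in \mathbf{\Sigma^1_1} \iff L = \{\alpha \in \Sigma^{\omega} : (\alpha,\beta) \in B \text{ for some } \beta \in \Gamma^{\omega}\} \text{ for some Borel } B \subseteq (\Sigma \times \Gamma)^{\omega}.
\end{aligned}
\]

As a standard point of comparison, the \(\omega\)-regular languages (those recognized by deterministic Muller automata) form a strict subclass of \(\mathrm{BC}(\mathbf{\Pi^0_2})\), so the deterministic networks considered here strictly exceed finite automata on infinite words.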

Category: seminars