ECO
Exact Computing

The joint ECO/ESCAPE seminar takes place on Wednesdays at 1:30 pm in the LIRMM seminar room (building 4).

Topics:

Contact(s): Gwenaël Richomme, Laurent Imbert



Secure multiplication in finite fields

Damien Vergnaud, Sorbonne Université
Wednesday, January 9, 2019

TBA

Sieving algorithms for the Shortest Vector Problem

Elena Kirshanova, LIP, ENS Lyon
Wednesday, November 21, 2018

In this talk I give an overview of the complexity of algorithms for the Shortest Vector Problem (SVP), with a focus on a particular family of algorithms: sieving algorithms. First, I explain why this problem is important and how the complexity of SVP algorithms impacts the parameters of cryptographic protocols. Then, I present recent advances in memory-efficient versions of sieving algorithms. I explain locality-sensitive techniques for this type of algorithm. This part of the talk is based on joint works with Gottfried Herold and Thijs Laarhoven. Finally, I present recent advances in practical aspects of sieving algorithms for SVP. I describe the technical challenges that arise when one tries to make sieving algorithms practical, and how one can overcome some of them. This part of the talk is ongoing work with Martin R. Albrecht, Leo Ducas, Gottfried Herold, Eamonn W. Postlethwaite, and Marc Stevens.

On the Ring-LWE and Polynomial-LWE problems

Alexandre Wallet, LIP, École Normale Supérieure de Lyon
Wednesday, November 14, 2018

The Ring Learning With Errors problem (RLWE) comes in various forms. Vanilla RLWE is the decision dual-RLWE variant, which consists in distinguishing from uniform a distribution depending on a secret belonging to the dual of the ring of integers O_K of a specified number field K. In primal-RLWE, the secret instead belongs to O_K. Both decision dual-RLWE and primal-RLWE enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (PLWE), which is defined not using the ring of integers O_K of a number field K but a polynomial ring Z[x]/f for a monic irreducible f in Z[x]. We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal RLWE and PLWE that work for a family of polynomials f that is exponentially large as a function of deg f (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for RLWE. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.
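
As a toy illustration of the objects involved (my own sketch, not part of the talk), here is a Polynomial-LWE sample in Z[x]/(x^n + 1) modulo q; the dimensions and the error distribution below are purely illustrative and far too small for security.

import random

n, q = 8, 97                      # toy parameters, illustrative only

def mul_mod(a, b):                # multiply residues mod (x^n + 1, q): negacyclic convolution
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < n:
                res[i + j] = (res[i + j] + ai * bj) % q
            else:
                res[i + j - n] = (res[i + j - n] - ai * bj) % q
    return res

def small():                      # "small" coefficients for secrets and errors
    return [random.choice([-1, 0, 1]) for _ in range(n)]

def plwe_sample(s):
    a = [random.randrange(q) for _ in range(n)]        # uniform ring element
    e = small()                                        # small error
    b = [(x + y) % q for x, y in zip(mul_mod(a, s), e)]
    return a, b                   # decision PLWE: distinguish pairs (a, b) from uniform

s = small()
print(plwe_sample(s))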

Polynomial Linear System Solving with Errors by Simultaneous Polynomial Reconstruction of Interleaved Reed-Solomon Codes

Ilaria Zappatore, LIRMM
Wednesday, October 3, 2018

A classical method for solving systems of linear equations A(x)y(x) = b(x) with coefficients in Fq[x] consists in evaluating the system at L points and then recovering the solution by interpolation. We consider a scenario where the evaluations are done by black boxes that can output erroneous values. The number of errors is bounded by an integer E. In a recent work, Boyer and Kaltofen provided a deterministic algorithm that uses a number L1 of points which depends on the error bound E. In this work we reexamine this problem as a generalization of the Simultaneous Polynomial Reconstruction of Reed-Solomon codes. We provide a probabilistic algorithm which allows one to recover the solution using L < L1 evaluation points.
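
A minimal sketch (my own code) of the classical, error-free evaluation/interpolation scheme described in the first sentence, assuming the solution happens to be a vector of polynomials; the error-correcting algorithms discussed in the talk are of course more involved.

from fractions import Fraction

def poly_eval(p, x):                      # p = [c0, c1, ...] encodes c0 + c1*x + ...
    return sum(Fraction(c) * x**i for i, c in enumerate(p))

def lagrange(points):                     # interpolating function through (x_i, y_i)
    def value(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return value

A = [[[1], [0, 1]],                       # A(x) = [[1, x], [x, 2]]
     [[0, 1], [2]]]
y_true = [[0, 1], [1, 1]]                 # hidden solution y(x) = (x, x + 1)
pts0, pts1 = [], []
for x in range(1, 5):                     # evaluate at L = 4 points, solve each 2x2 system
    a = [[poly_eval(e, x) for e in row] for row in A]
    b = [a[0][0]*poly_eval(y_true[0], x) + a[0][1]*poly_eval(y_true[1], x),
         a[1][0]*poly_eval(y_true[0], x) + a[1][1]*poly_eval(y_true[1], x)]
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]            # Cramer's rule
    pts0.append((x, (b[0]*a[1][1] - b[1]*a[0][1]) / det))
    pts1.append((x, (a[0][0]*b[1] - a[1][0]*b[0]) / det))
print([lagrange(pts0)(x) for x in range(5)])   # recovers y0(x) = x
print([lagrange(pts1)(x) for x in range(5)])   # recovers y1(x) = x + 1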

Polynomial identity testing for bounded depth circuits

Alexey Milovanov, Moscow
Wednesday, August 22, 2018

Polynomial identity testing (PIT) is the problem of efficiently determining whether a multivariate polynomial is identically equal to zero. The polynomial is given as an arithmetic circuit or an algebraic formula.

There exists a simple polynomial-time randomized algorithm for solving PIT: just test equality to zero at a ``random'' point. However, no deterministic polynomial-time algorithm solving PIT is known. Designing an efficient deterministic algorithm is one of the most challenging open problems in computational complexity (there exists a connection between derandomization of PIT and proving circuit lower bounds).
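
For illustration (my own sketch, not part of the talk), the randomized test in a few lines: by the Schwartz-Zippel lemma, a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|, so random evaluation almost always detects non-zeroness.

import random

def probably_zero(poly, nvars, trials=20, modulus=10**9 + 7):
    # poly is a black box mapping an integer point to an integer
    for _ in range(trials):
        point = tuple(random.randrange(modulus) for _ in range(nvars))
        if poly(point) % modulus != 0:
            return False                  # a nonzero value certifies "not identically zero"
    return True                           # identically zero with high probability

print(probably_zero(lambda p: (p[0] + p[1])**2 - p[0]**2 - 2*p[0]*p[1] - p[1]**2, 2))  # True
print(probably_zero(lambda p: (p[0] + p[1])**2 - p[0]**2 - p[1]**2, 2))                # False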

Although a deterministic polynomial-time algorithm for PIT on general algebraic circuits is unknown, for some restricted classes of circuits it is possible to derandomize PIT. One unexpected tool for proving such results is the Sylvester-Gallai theorem: if finitely many distinct points in the real plane are such that, for every pair of distinct points, the line through them also contains a third point, then they all lie on the same line.

I will talk about the latest achievements in this area (including my results for depth-4 circuits).

Construction of an effective non-sofic two-dimensional shift with a subexponential number of patterns

Julien Destombes, Escape, LIRMM
Wednesday, July 4, 2018

While the characterization of sofic shifts is well understood in dimension 1, it is much less clear in higher dimensions. The proofs of non-soficness for the shifts found in the literature rely on the existence of an exponential number of patterns. We propose here a construction of an effective non-sofic shift with a polynomial number of patterns.

Kolmogorov complexity and the secret key agreement problem

Andrei Romashchenko, Escape, LIRMM
Wednesday, June 20, 2018

We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings $X$ and $Y$ is equal to the length of the longest shared secret key that two parties, one having $X$ and the other one having $Y$, can establish via a probabilistic protocol with interaction on a public channel. This result is also extended to the protocols with more than two parties. (A joint work with Marius Zimand.)

Algorithms for Structured Linear Systems Solving and their Implementation

Romain Lebreton, LIRMM
Wednesday, June 6, 2018

There exists a vast literature dedicated to algorithms for structured matrices, but relatively few descriptions of actual implementations and their practical performance in symbolic computation. In this talk, we consider the problem of solving Cauchy-like systems, and its application to mosaic Toeplitz systems, in two contexts: first over finite fields where basic operations have unit cost, then over Q. We introduce new variants of previous algorithms and describe an implementation of these techniques and its practical behavior. We pay special attention to particular cases such as the computation of algebraic approximants.

General purpose 2D computing medium

Frédéric Gruau, LRI
Friday, April 6, 2018

The theoretical model of our computers, the Turing machine, offers UNBOUNDED MEMORY. The Von Neumann computer architecture is its efficient implementation in real hardware. It does not A PRIORI BOUND the memory. In fact, memory has become so huge that we tend to think of it as an INEXHAUSTIBLE SOURCE.
In this talk, we propose an alternative theoretical-model/architecture synergy, offering UNBOUNDED COMPUTING POWER.
The theoretical model consists of interconnected agents executing instructions that can create other agents. During the execution of a program, a network of agents grows from a single ancestor. The computing power therefore also grows, without limit.
The architecture should not A PRIORI BOUND the number of Processing Elements (PEs): it can be a thousand, or \ldots a thousand billion! For such a scalable architecture, we achieve performance on two levels: 1- theoretical: the time and space complexity of a parallel algorithm grows optimally as the number of PEs grows. 2- practical: the constant factor behind the complexity formula is small enough that it can be demonstrated for concrete algorithms and significantly large data, on a standard CPU; we do it for sorting, and matrix multiplication, with a thousand scalars.

Because PEs come as an INEXHAUSTIBLE SOURCE, we have to give up on programming each PE individually. Instead, we consider this arbitrarily large ensemble of PEs as a ``2D computing-medium'': PEs are fine grain, behave identically, and communicate only with their nearest neighbors in physical space. For simplification, we use the simplest kind of computing medium, which is the Cellular Automaton (CA): PEs are CA cells, with a small state, north, south, east and west connections, and synchronized update. More generally, one could use an amorphous computing medium, which is simply homogeneous, isotropic, and quasi-synchronous.
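
As a very rough sketch (assumptions mine; the update rule below is a placeholder majority vote, not the GPCA rule of the talk), a 2D computing medium with north/south/east/west neighbours and synchronous update can be simulated as follows.

import random

def step(grid):                           # one synchronous update of the whole medium
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neigh = [grid[(i - 1) % h][j], grid[(i + 1) % h][j],
                     grid[i][(j - 1) % w], grid[i][(j + 1) % w]]
            new[i][j] = 1 if sum(neigh) + grid[i][j] >= 3 else 0   # placeholder rule: majority of 5
    return new                            # every cell reads the old grid, writes the new one

grid = [[random.randint(0, 1) for _ in range(16)] for _ in range(16)]
for _ in range(8):                        # each iteration is one "film frame"
    grid = step(grid)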

We map the unbounded-power model onto the unbounded-power hardware. In other words, we simulate the development of an agent network, on a CA, using a (very complex) CA transition rule.
The memory of each PE stores the same dozen boolean flags representing spatially extended, physics-like structures, among which are ``membranes'', ``waves'' and ``channels''.
By associating colors to structures, each PE gets a specific color; the state of the whole hardware is drawn as a 2D picture, with one pixel for each PE. When the CA iterates, the state, and therefore the pixel color, changes. We can watch the ongoing computation as a film. ``Membranes'', ``channels'' and ``waves'' move in 2D space. What really happens is resetting the structure flag on one PE and setting it on an adjacent PE.
The movements follow physics-like forces of repulsion or attraction, with two main goals:

1-homogenize the density of membranes, and

2-divide membranes by pinching at the equator.

Each membrane defines the support of one agent, so dividing a membrane really means creating a new agent.
Homogenization acts constantly. In contrast, division is triggered by instructions sent by the host.
A division instruction optionally inserts a ``channel'' between the two offspring membranes, which represents a connection between agents. The movement of channels, membranes and waves is constrained to enforce the topological invariant corresponding to their role: membranes remain closed loops, channels remain adjacent to the two connected membranes, waves sweep the entire CA. The computation of each structure's final movement is all the more complex as structures are defined with respect to one another, in a hierarchical way. Constraints and moves need to be propagated upward and downward this hierarchy, for cross-coherence.

A sequence of division instructions develops a specific network of agents. It represents a program. It is loaded from the host onto the CA by propagating a train of ``waves'' carrying instructions. The CA is a ``General Purpose'' CA (GPCA), because when fed with a different sequence of instructions, it will develop a different network. Furthermore, developing networks is really an expressive parallel programming technique. For example, unlike static circuits, dynamic circuits can capture algorithms whose dependency graph is data-dependent.

The GPCA is unusually complex, with 64 bits of state and 20,000 gates. We use a CA platform, with a language, a compiler, an optimizer and a debugger, that allows one to program and execute very rapidly CAs of unprecedented complexity. The CA platform opens a new avenue to start exploring the wide and intricate world of complex CAs.

Integer Polynomial Sparse Interpolation with Near-Optimal Complexity

Dan Roche, Computer Science Dept., US Naval Academy
Wednesday, March 14, 2018

We investigate algorithms to discover the nonzero coefficients and exponents of an unknown sparse polynomial, provided a way to evaluate the polynomial over any modular ring. This problem has been of interest to the computer algebra community for decades, and its uses include multivariate polynomial GCD computation, factoring, and sparse polynomial arithmetic. Starting with the early works of Zippel, Ben-Or and Tiwari, and Kaltofen, one line of investigation has a key advantage in achieving the minimal number of evaluations of the polynomial, and has received considerable attention and improvements over the years. It is closely related to problems in coding theory and exponential analysis. The downside, however, is that these methods are not polynomial-time over arbitrary fields. A separate line of work starting with Garg and Schost has developed a different approach that works over any finite field. After years of improvements, the complexity of both approaches over ZZ[x] is currently the same. They scale well in most aspects except for the degree; the bit complexity in both cases is currently cubic in the bit-lengths of the exponents. By careful combination of the techniques in both approaches and a few new tricks, we are now able to overcome this hurdle. We present an algorithm whose running time is softly-linear in the size of the output and performs nearly the minimal number of evaluations of the unknown polynomial. Preliminary implementation results indicate good promise for practical use when the unknown polynomial has a moderate number of variables and/or large exponents.

Some algorithms for matrices with displacement structure

Claude-Pierre Jeannerod, LIP, ENS Lyon
Wednesday, March 7, 2018

The notion of displacement rank provides a uniform way to deal with various classical matrix structures (such as Toeplitz, Hankel, Vandermonde, Cauchy) and allows for various generalizations. In this talk, we will review some recent progress in the design of asymptotically fast algorithms for multiplying and inverting such matrices. In particular, for n by n matrices of displacement rank alpha, we will see how to reduce the cost of such operations from about alpha^2*n (up to log factors) down to alpha^(w-1)*n, where w < 2.373 is the exponent of (dense, unstructured) matrix multiplication. (Joint work with A. Bostan, C. Mouilleron, and E. Schost.)
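
A quick numeric check (my own illustration, with numpy assumed; not necessarily the operator used in the talk) that a Toeplitz matrix has displacement rank at most 2 for the Sylvester-type operator T -> Z*T - T*Z, where Z is the down-shift matrix.

import numpy as np

n = 8
t = np.random.randint(-5, 6, size=2*n - 1)          # diagonals t_{-(n-1)} .. t_{n-1}
T = np.array([[t[i - j + n - 1] for j in range(n)] for i in range(n)])   # Toeplitz matrix
Z = np.eye(n, k=-1)                                  # ones on the subdiagonal
D = Z @ T - T @ Z                                    # the displacement of T
print(np.linalg.matrix_rank(D))                      # prints 2 (or less)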

Fast computation of Puiseux series

Adrien Poteaux, LIFL, Université Lille I
Wednesday, February 28, 2018

Puiseux series are an essential tool for the study of singularities of plane algebraic curves. This talk will present improvements to the Newton-Puiseux algorithm, which computes such series. We will describe a symbolic-numeric strategy to compute a numerical approximation of these series with an exact structure. We will then give the main ideas that have made it possible to improve the arithmetic complexity of the algorithm. We will conclude by showing the limits reached by this algorithm, and will mention directions that could get around them in the future.

qDSA: Efficient, compact key exchange and signatures for IoT from elliptic curves and Kummer surfaces

Benjamin Smith, LIX, Inria Saclay
Wednesday, February 14, 2018

The Internet of Things (IoT) means that an exploding number of small connected devices are increasingly pervading almost every aspect of our daily lives. Securing these devices is extremely important, but it is also a great technical challenge: most of our state-of-the-art cryptographic algorithms are simply too "heavy" to run on these tiny devices with such limited computational and energy resources. This is especially true for asymmetric techniques like key exchange and digital signatures, which by their very nature require intensive algebraic computations.

This talk will describe the qDSA signature scheme, and explain how we have applied the classic geometric theory of Kummer varieties to produce compact, low-memory, high-speed software for high-security key exchange and digital signatures on microcontrollers with very limited memory.

(Joint work with Joost Renes, Radboud University Nijmegen, NL)

Multidimensional SFTs with countably many configurations

Ilkka Törmä, University of Turku, Finland
Wednesday, October 18, 2017

A recurring theme in the theory of tilings is the construction of tile sets whose associated shift spaces are in some sense as restricted as possible. Aperiodic SFTs are one example: all configurations are constrained to have no global period. We consider another restriction, namely, that the set of valid configurations is countable. Such tile sets cannot be aperiodic, but the geometry of their configurations is not as constrained as one might think. In particular, it is possible to carry out some hierarchical constructions.

Mealy automata: dynamics of the action, singular points and Wang tilings

Thibault Godin, IRIF
Wednesday, October 11, 2017

Mealy automata are a powerful tool to generate (semi)groups. Since the 80's, they have been used to solve several major conjectures in (semi)group theory. However, people started to study them from a (theoretical) computer science perspective only recently. This approach is rich and has brought many interesting results and questions.
In this talk, we focus on the dynamics of the action of the group on the regular tree and we show that classical tools from automata theory can be used to simplify and generalize known results and to provide information on this action. Then, we study a closely related problem and use a connection between Mealy automata and Wang tilings to obtain an undecidability theorem.

This is based on a joint work with D. D'Angeli, I. Klimann, M. Picantin, and E. Rodaro.

Stopping time complexity

Alexander Shen, LIRMM, Escape
Wednesday, October 4, 2017

Last year Vovk and Pavlovic suggested a definition of stopping time complexity: how many bits of information are needed to specify when to stop while reading an infinite sequence of bits. (They had some philosophical reasons to introduce this definition, but technically their suggestion can be described in this way.) It turns out that this notion of complexity is a special case of a general scheme of conditional complexities (different topological structures on conditions, programs and descriptions lead to different complexities): the stopping time complexity equals $C(x|x*)$ where the condition $x$ is understood as information about a prefix of a sequence. We will also consider some other equivalent definitions of stopping time complexity.

On some interesting ternary formulas

Pascal Ochem, ALGCO, LIRMM
Wednesday, September 27, 2017

We show that, up to renaming of the letters, the only infinite ternary words avoiding the formula $ABCAB.ABCBA.ACB.BAC$ (or $ABCAB.BCB.AC$) have the same set of recurrent factors as the fixed point of
$0 \to 012$
$1 \to 02$
$2 \to 1$.
Also, we show that the formula $ABAC.BACA.ABCA$ is 2-avoidable.
This leads to disproving a conjecture of Grytczuk.
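
A few lines (illustration only, not part of the talk) to generate a prefix of the fixed point of the morphism above.

morphism = {'0': '012', '1': '02', '2': '1'}

word = '0'
for _ in range(10):                       # iterate the morphism from the letter 0
    word = ''.join(morphism[c] for c in word)
print(word[:60])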

A Characterization of Infinite LSP Words

Gwenaël Richomme, LIRMM, Escape
Wednesday, July 12, 2017

G. Fici proved that a finite word has a minimal suffix automaton if and only if
all its left special factors occur as prefixes. He called LSP all finite and infinite words having this latter property. We characterize here infinite LSP words in terms of $S$-adicity. More precisely we provide a finite set of morphisms $S$ and an automaton ${\cal A}$ such that an infinite word is LSP if and only if it is $S$-adic and all its directive words are recognizable by ${\cal A}$.

Factoring integers with ECM on the Kalray MPPA-256 processor

Jérémie Detrey, LORIA, Inria Nancy
Wednesday, June 28, 2017

The Kalray MPPA-256 is a recent low-power chip which embeds 256
32-bit cores. As such, it can be used as an efficient co-processor
for accelerating computations, bridging the gap between usual CPUs
and the more throughput-oriented GPUs, all the while consuming far
less power.

In this talk, we will present a fast and energy-efficient
implementation of the Elliptic Curve Method (ECM, an algorithm for
factoring integers) on this platform. After a brief presentation of
the ECM and of its important role in cryptanalysis, especially in
the context of factoring RSA keys, we will glance at some of the key
architectural features of the MPPA-256. We will then present an
optimized library for multiprecision modular arithmetic, on top of
which the ECM implementation was built, before concluding with a few
benchmarks and comparisons with the state of the art.

This is a joint work with Masahiro Ishii, Pierrick Gaudry, Atsuo
Inomata, and Kazutoshi Fujikawa.

On the direct product of periodic and quasiperiodic sequences

Andrei Romashchenko, LIRMM, Escape
Wednesday, June 7, 2017

It is known (Avgustinovich, Fon-Der-Flaass, Frid) that the direct product of any periodic bi-infinite sequence $X$ with any quasiperiodic (weakly or strongly recurrent) sequence $Y$ is also quasiperiodic (weakly or strongly recurrent respectively). We will present several generalizations of this combinatorial lemma and discuss its applications in the study of multidimensional SFTs.

Probabilistic analysis of randomness: Sturmian words and continued fractions

Eda Cesaratto, Universidad Nacional de General Sarmiento, Buenos Aires, Argentina
Wednesday, May 24, 2017

(Joint session with the Pôle Algo-Calcul seminar)

Sturmian words and continued fractions are representations of numbers that share the feature of being codings of orbits of dynamical systems.
Probabilistic analysis of parameters, such as the frequency of digits, for the continued fraction expansion of real numbers is now a classical subject of research.

Sturmian words are related to the celebrated Kronecker sequence K(x) of the fractional
parts of the multiples of a real x, and contrary to the continued fraction context, probabilistic analyses are not frequent in the literature.

We perform a probabilistic study of the well-known discrepancy and of the Arnold measure, which can be viewed as measures of randomness of the sequence K(x). This study gives insight into the pseudo-randomness of the sequence K(x) for a random input x. We consider two main cases, the case where the ``input'' x is a random real and the case where x is a random rational, and we exhibit similarities between them.

Joint work with Brigitte Vallée (CNRS, U. de Caen)
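
As a toy illustration of the objects involved (my own code, not the analysis of the talk), here are the Kronecker sequence K(x) = ({n x}) and the star discrepancy of its first N points, computed with the classical one-dimensional formula.

import math

def kronecker(x, N):
    return [math.modf(n * x)[0] for n in range(1, N + 1)]   # fractional parts {n x}

def star_discrepancy(points):
    pts = sorted(points)
    N = len(pts)
    return max(max((i + 1) / N - pts[i], pts[i] - i / N) for i in range(N))

golden = (math.sqrt(5) - 1) / 2
print(star_discrepancy(kronecker(golden, 1000)))   # small: the golden ratio is badly approximable
print(star_discrepancy(kronecker(1 / 3, 1000)))    # stays large: rational input x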

Randomness and Kolmogorov extractors with multiple sources

Daniil Musatov, Moscow Institute of Physics and Technology
Wednesday, May 17, 2017

A randomness extractor is a function that produces "better" or "new" randomness from weak random sources. Extractors with multiple sources get two (or more) independent random variables with sufficiently high min-entropy and output a random variable close to uniform. A Kolmogorov extractor is a function that formalizes the same notion in terms of Kolmogorov complexity. Namely, it gets two strings with sufficiently high Kolmogorov complexity and sufficiently small mutual information and produces a string with complexity close to its length. Hitchcock, Pavan and Vinodchandran have shown that randomness and Kolmogorov extractors are essentially the same objects: each extractor of one type is also an extractor of the other type with slightly worse parameters. Recently, new breakthrough explicit constructions of randomness extractors were invented by Chattopadhyay, Zuckerman and Li. We discuss their applications to the theory of Kolmogorov extraction. In particular, the new extractor for three sources appears to be a Kolmogorov extractor for time-bounded Kolmogorov complexity. As far as the speaker knows, the case of three sources was not previously considered explicitly. For the more conventional case of two sources we probably need even more sophisticated constructions.

The point decomposition problem in hyperelliptic Jacobian varieties

Alexandre Wallet, LIP, ENS Lyon
Wednesday, May 17, 2017

The Discrete Logarithm Problem (DLP) in abelian varieties defined over finite fields is believed to be hard in general. For example, its instantiation on elliptic curves serves as the security basis of well-known cryptographic primitives. Indeed, no efficient method is currently known to compute elliptic discrete logarithms, except for very restricted classes of elliptic curves. Another interesting family of abelian varieties is that of the Jacobian varieties associated with algebraic curves. On the one hand, more efficient algorithms are known for solving the DLP on these structures; on the other hand, it is sometimes possible to transfer an elliptic DLP to a DLP in a Jacobian variety, which can have implications for the choice of cryptographic parameters.

The talk will present results improving the algorithms for computing discrete logarithms in the Jacobian varieties of hyperelliptic curves, which are the curves closest to elliptic curves. In particular, the improvements obtained would allow the complete realization of a significant discrete logarithm computation (184 bits) on a "binary" genus 2 curve. Such a computation was previously out of reach, and compares to recent records (EuroCrypt 2017) for the DLP in finite fields. I will also present some more theoretical work aiming at exploiting additional structural properties, as well as its limits. Time permitting, I will also present a more general contribution to discrete logarithm computation for arbitrary algebraic curves.

Reconstruction algorithms for sums of affine powers

Timothée Pecatte, LIP, Lyon
Wednesday, April 19, 2017

A sum of affine powers is an expression of the form $f(x) = a_1 (x - b_1)^{e_1} + \dots + a_s (x - b_s)^{e_s}$.
Although quite simple, this model is a generalization of two well-studied models: Waring decomposition and sparsest shift.
For these three models there are natural extensions to several variables,
but this talk will be focused on univariate polynomials.
We will present efficient algorithms for reconstructing the smallest expression for an input polynomial $f$.
These algorithms are built on techniques developed for proving lower bounds on the number of terms $s$ needed to represent a polynomial.
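
A tiny illustration (sympy assumed; this is not the reconstruction algorithm of the talk) that an expression with s = 2 affine powers can have many more monomials when expanded.

import sympy as sp

x = sp.symbols('x')
f = 3*(x - 1)**7 + 2*(x + 5)**4           # s = 2 terms in the affine-power model
print(sp.expand(f))                        # dense form: 8 nonzero monomials
print(sp.Poly(sp.expand(f), x).length())   # number of nonzero monomials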

Infinite time machines and admissible ordinals

Bruno Durand, LIRMM, Escape
Wednesday, March 8, 2017

TBA

Incompressibility, Kolmogorov complexity and normal numbers

Alexander Shen, LIRMM, Escape
Wednesday, March 1, 2017

A normal number is a real number such that every block of $k$ bits has the same limit frequency $2^{-k}$ in its binary representation. Normal numbers have a characterization (V. Becher, P. Heiber, 2013) in terms of incompressibility by finite automata. However, this characterization does not follow the standard scheme of algorithmic complexity. We explain that it can be restated (and simplified) by considering automata as decompressors rather than compressors; this also gives simple proofs of classical results about normal numbers (a normal number remains normal when multiplied by a rational, etc.).
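
As an empirical illustration (my own code, unrelated to the automaton characterization discussed in the talk), one can estimate block frequencies in the binary expansion of sqrt(2), a number conjectured, but not proved, to be normal.

from math import isqrt
from collections import Counter

N = 20000
bits = bin(isqrt(2 << (2 * N)))[3:]        # N bits of sqrt(2) after the leading 1
k = 3
blocks = Counter(bits[i:i+k] for i in range(len(bits) - k + 1))
total = sum(blocks.values())
for b in sorted(blocks):
    print(b, round(blocks[b] / total, 4))  # each frequency is close to 2**-k = 0.125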

A simple proof of (deterministic) #P-completeness of counting roots in finite fields

Aleksei Milovanov, Moscow State University
Wednesday, January 25, 2017

A counting problem is #P-complete if every problem in #P (counting objects that have some easily decidable property) can be reduced to it. This definition makes sense for different reduction notions. It has been known for a long time (J. von zur Gathen, M. Karpinski and I. Shparlinski, Counting curves and their projections, Computational Complexity, 6, 64--99, 1996) that counting the solutions of a polynomial equation P(x,y)=0 in F_{p^n} is #P-complete with respect to probabilistic reductions if the polynomial is given in a sparse representation (zero coefficients are omitted). We provide a simple proof that improves this result (the polynomials are univariate, and the reductions are deterministic).

Are there non-stochastic objects?

Nikolay K. Vereshchagin, Moscow State University
Wednesday, January 25, 2017

An object (encoded by a binary string x) is called stochastic if there is a simple plausible statistical explanation for it. As explanations we consider probabilistic Turing machines M whose running time is bounded by a polynomial of the length of the output. We call such a machine M a plausible explanation for x if every property of x recognized by a polynomial time deterministic Turing machine with a short program is shared by a non-negligible fraction of M's outputs. We call x stochastic if there is a plausible explanation M for x such that the program of M is short, i.e., the explanation is simple.

Are all objects stochastic? Perhaps not. For instance, structured objects like paintings might be non-stochastic. We will prove that under some complexity-theoretic assumption there are infinitely many objects that are not stochastic. The assumption states that there is a unary language in NP that is not recognized by any polynomial time probabilistic machine. (A joint work with A. Milovanov.)

An Algebraic Geometric Approach to Multidimensional Symbolic Dynamics

Jarkko Kari, University of Turku, Finland
Wednesday, December 7, 2016

We study low complexity multidimensional words and subshifts using tools of algebraic geometry. We express the word as a multivariate formal power series over integers and investigate the setup when there is an annihilating polynomial: a polynomial whose product with the power series is zero. We prove that the word must then be a sum of periodic words over integers, possibly with unbounded values. As a specific application of the method we obtain an asymptotic version of the well-known Nivat's conjecture.

Subtraction-free computations and cluster algebras

Dima Grigoriev, Laboratoire Paul Painlevé, Université Lille 1
Wednesday, September 7, 2016

Using cluster transformations we design
subtraction-free algorithms for computing Schur polynomials and for
generating spanning trees and arborescences polynomials. The latter
provides an exponential complexity gap between circuits admitting
arithmetic operations $+, \, \times, \, /$ versus $+,\, \times$. In
addition, we establish an exponential complexity gap between
circuits admitting $+,\, -,\, \times,\, /$ versus $+, \, \times, \,
/$. Together with V. Strassen's result on "Vermeidung von
Divisionen" this closes a long-standing problem on comparative
complexity power between all possible subsets of operations $+,\,
-,\, \times,\, /$.

(a joint work with S. Fomin, G. Koshevoy)

Determining sets of quasiperiods of infinite words

Guilhem Gamard, Escape, LIRMM
Wednesday, July 13, 2016

A word is quasiperiodic if it can be obtained by concatenations and
overlaps of a smaller word, called a quasiperiod. Based on links
between quasiperiods, right special factors and square factors, we
introduce a method to determine the set of quasiperiods of a given
right infinite word. Then we study the structure of the sets of
quasiperiods of right infinite words and, using our method, we
provide examples of right infinite words with extremal sets of
quasiperiods (no quasiperiod is quasiperiodic, all quasiperiods
except one are quasiperiodic, \ldots). Our method is also used to
provide a short proof of a recent characterization of quasiperiods
of the Fibonacci word. Finally we extend this result to a new
characterization of standard Sturmian words using a property of
their sets of quasiperiods.
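
For illustration (a small utility of mine, not the method of the talk), checking whether q is a quasiperiod of a finite word w amounts to checking that the occurrences of q cover w entirely.

def is_quasiperiod(q, w):
    occ = [i for i in range(len(w) - len(q) + 1) if w.startswith(q, i)]
    if not occ or occ[0] != 0 or occ[-1] + len(q) != len(w):
        return False
    # consecutive occurrences must overlap or be adjacent
    return all(b <= a + len(q) for a, b in zip(occ, occ[1:]))

print(is_quasiperiod("aba", "abaababaaba"))   # True: occurrences of aba cover the word
print(is_quasiperiod("ab",  "abaababaaba"))   # False: the last letter is not covered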

Tight upper bound on splitting by linear combinations for pigeonhole principle

Vsevolod Oparin, St. Petersburg University of the Russian Academy of Sciences
Wednesday, June 29, 2016

SAT is the well-known problem of deciding whether a given Boolean formula has a satisfying assignment. The main technique for solving SAT is the family of DPLL algorithms. A usual DPLL algorithm chooses a single variable and an order of values to substitute, and runs recursively on the simplified formulas, seeking a satisfying assignment. We consider an extension of the algorithm which chooses not a single variable but a linear form modulo 2. It is known that formulas encoding the pigeonhole principle on $n$ pigeons and $m$ holes ($PHP_n^m$) remain hard even for linear splitting and require $2^{\Omega(n)}$ running time. The known bound for a usual DPLL algorithm without linear splitting is $2^{\Theta(n \log n)}$. In this talk we show that $PHP_n^m$ can be solved in $2^{O(n)}$ time with linear splitting and, as a corollary, that formulas encoding the perfect matching principle on graphs with $n$ vertices can also be solved in time $2^{O(n)}$.

A Linear Acceleration Theorem for 2D Cellular Automata on all Complete Neighborhoods

Anaël Grandjean, Escape, LIRMM
Wednesday, June 29, 2016

Linear acceleration theorems are known for most computational models. Although such results have been proved for two-dimensional cellular automata working on specific neighborhoods, no general construction was known. We present here a technique of linear acceleration for all two-dimensional languages recognized by cellular automata working on complete neighborhoods.

Avoidability of formulas with two variables

Pascal Ochem, LIRMM, AlGCo
Wednesday, June 22, 2016

In combinatorics on words, a word $w$ over an alphabet $\Sigma$ is said to avoid a pattern $p$ over an alphabet $\Delta$ of variables if there is no
factor $f$ of $w$ such that $f = h(p)$ where h : $\Delta^* \to \Sigma^*$ is a non-erasing
morphism. A pattern $p$ is said to be $k$-avoidable if there exists an infinite
word over a $k$-letter alphabet that avoids $p$. We consider the patterns
such that at most two variables appear at least twice, or equivalently, the
formulas with at most two variables. For each such formula, we determine
whether it is $2$-avoidable.
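
As a brute-force illustration of the definition (my own sketch, not the proof techniques of the talk), one can test whether a finite word encounters a pattern, i.e. contains $h(p)$ for some non-erasing morphism $h$.

def encounters(pattern, word):
    def match(p, s, h):                        # can p map onto exactly s, given partial morphism h?
        if not p:
            return not s
        v = p[0]
        if v in h:
            img = h[v]
            return s.startswith(img) and match(p[1:], s[len(img):], h)
        for cut in range(1, len(s) + 1):       # try every non-empty image for the variable v
            h[v] = s[:cut]
            if match(p[1:], s[cut:], h):
                return True
            del h[v]
        return False
    return any(match(pattern, word[i:j], {})
               for i in range(len(word)) for j in range(i + 1, len(word) + 1))

print(encounters("ABA", "0120210"))   # True: A -> 0, B -> 12 gives the factor 0120
print(encounters("AA", "0102010"))    # False: this word is square-free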

Logic for Algorithms

Rod Downey, Victoria University, Wellington, New Zealand
Wednesday, June 15, 2016

Graphs and similar objects which can be coded by reasonable parse
languages allow for algorithmic metatheorems to demonstrate parametric
tractability. This talk will review some of the high points in this area. It
will also discuss a recent application of this idea to low-dimensional topology
by the author and Benjamin Burton.

An introduction to uniform computation: leader election on cellular automata with periodic input

Nicolas Bacquey, GREYC, Caen
Wednesday, May 11, 2016

All usual models of computation feature a "singularity" that makes it possible to start a computation: the head of a Turing machine, an initial state, a cell at the border of the configuration of a cellular automaton... Is it possible to define a notion of computation on a model stripped of as many of these singularities as possible? How should the input be presented, how should a result be read? How should complexity be defined?

We will try to answer these questions in the computation model of cellular automata over periodic configurations, a model in which both the data and the program are uniform in space. More precisely, we will present a family of algorithms performing leader election on these periodic configurations. By definition it is impossible to elect a single cell of the configuration; the algorithms presented use techniques that turn spatial differences into temporal differences, inducing symmetry breaking. In this way, one manages to elect a single equivalence class of cells of the configuration.

On Machines and Melodies - An Invitation to Infinitary Computability

Merlin Carl, University of Konstanz, Germany
Wednesday, April 27, 2016

Turing computability provides an intuitive and attractive formal
framework for the analysis of the question which mathematical objects
and functions can be generated by applying a certain mechanical
procedure finitely many times. However, in mathematical practice,
we often encounter examples of objects generated by infinite processes
involving limits, such as adding reals, computing with limits of real
sequences, forming direct limits of directed systems, hulls, closures
etc. This motivates the study of infinitary computations that can account
for the limit phenomenon in such processes and should provide
a framework for extending results from recursion theory to infinitary
mathematical objects.

I will introduce some of the most prominent machine models of
infinitary computability and give an overview of results obtained about
them.

Stable states of perturbed Markov chains

Stéphane Le Roux, Université Libre de Bruxelles
Friday, April 8, 2016

Given an infinitesimal perturbation of a discrete-time finite Markov chain, we seek the states that are stable despite the perturbation, i.e. the states whose weights in the stationary distributions can be bounded away from $0$ as the noise fades away. Chemists, economists, and computer scientists have been studying irreducible perturbations built with exponential maps. Under these assumptions, Young proved the existence of and computed the stable states in cubic time. We fully drop these assumptions, generalize Young's technique, and show that stability is decidable as long as $f \in O(g)$ is. Furthermore, if the perturbation maps (and their multiplications) satisfy $f \in O(g)$ or $g \in O(f)$, we prove the existence of and compute the stable states and the metastable dynamics at all time scales where some states vanish. Conversely, if the big-O assumption does not hold, we build a perturbation with these maps and no stable state. Our algorithm also runs in cubic time despite the general assumptions and the additional work. Proving the correctness of the algorithm relies on new or rephrased results in Markov chain theory, and on algebraic abstractions thereof.
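
A toy experiment (mine, with an ad hoc perturbation and numpy assumed; not Young's or the authors' algorithm) showing the phenomenon: as the noise eps fades, the stationary weight of some states vanishes while the stable state keeps weight close to 1.

import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to a probability distribution
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    return v / v.sum()

for eps in [1e-1, 1e-3, 1e-5]:
    # unperturbed chain: states 0 and 1 are absorbing, state 2 drains into them
    P = np.array([[1 - eps,  eps,        0.0],
                  [eps**2,   1 - eps**2, 0.0],
                  [0.5,      0.5,        0.0]])
    print(eps, np.round(stationary(P), 4))
# the weight of state 0 vanishes with eps: only state 1 is stable in this example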

Fast computation of minimal interpolation bases in Popov form for arbitrary shifts

Vincent Neiger, LIP, ENS Lyon
Wednesday, March 30, 2016

Recently, [Zhou - Labahn, 2012] gave an algorithm for Hermite-Padé approximation that can efficiently compute a polynomial matrix whose rows form a basis of all solutions and whose degrees are minimal for some prescribed shift; for efficiency, they require an assumption on the shift. In this talk, we propose an algorithm which is similarly efficient, makes no assumption on the shift, and deals with the more general problem of computing minimal interpolation bases. Those represent bases of solutions for the rational interpolation problem in [Beckermann - Labahn, 2000]; it encompasses for example Hermite-Padé approximation, M-Padé approximation, and the bivariate interpolation step in the Guruswami-Sudan algorithm.

Joint work with Claude-Pierre Jeannerod, Éric Schost, and Gilles Villard.

Code generator matrices as entropy extractors

Alessio Meneghetti, Università di Trento, Italy
Wednesday, February 17, 2016

There is a connection between the Walsh spectrum of the output of a binary random number generator (RNG) and the bias of individual bits. This can be used to show how previously known bounds on the performance of linear binary codes as entropy extractors can be derived by considering generator matrices as selectors of a subset of that spectrum. One can also prove a connection with the code's weight distribution, which implies a new bound on the performance of a linear extractor.
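
A small simulation (my own illustration, with numpy assumed; not the results of the talk) of a linear map over GF(2) acting as an extractor: here each output bit is the parity of four disjoint biased input bits, so its bias shrinks roughly like the piling-up bound (2*eps)^4 / 2.

import numpy as np

rng = np.random.default_rng(0)
n_samples, k = 200000, 4
p_one = 0.4                                    # biased source: P[bit = 1] = 0.4
X = (rng.random((n_samples, 4 * k)) < p_one).astype(np.uint8)
# generator matrix of a simple linear map: output bit j = parity of input bits 4j..4j+3
Y = X.reshape(n_samples, k, 4).sum(axis=2) % 2
print("input  bias:", abs(X.mean() - 0.5))             # about 0.1
print("output bias:", np.abs(Y.mean(axis=0) - 0.5))    # much closer to 1/2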

Efficient Universal Computation by Greedy Molecular Folding

Nicolas Schabanel, LIAFA, Paris
Wednesday, February 10, 2016

Joint work with: Cody Geary, Pierre-Étienne Meunier and Shinnosuke Seki

We introduce and study the computational power of Oritatami, a theoretical model to explore greedy molecular folding, by which the molecule begins to fold before the end of its production. This model is inspired by our recent experimental work demonstrating the construction of shapes at the nanoscale by folding an RNA molecule during its transcription from an engineered sequence of synthetic DNA. While predicting the most likely conformation is known to be NP-complete in other models, Oritatami sequences fold optimally in linear time. Although our model uses only a small subset of the mechanisms known to be involved in molecular folding, we show that it is capable of efficient universal computation, implying that any extension of this model will have this property as well.

We develop several general design techniques for programming these molecules. Our main result in this direction is an algorithm, in time linear in the sequence length, that finds a rule for folding the sequence deterministically into a prescribed set of shapes depending on its environment. This shows the corresponding problem is fixed-parameter tractable, although we proved it is NP-complete in the number of possible environments. This algorithm was used effectively to design several key steps of our constructions.

On aperiodic tilings of the plane by polygons

Nikolai Vereshchagin, Moscow University
Wednesday, January 20, 2016

We consider some families of self-similar tilings of the plane by triangles, rectangles and hexagons. Each family we consider corresponds to a substitution scheme. We investigate which of those families of tilings can be defined by local rules.

On algorithms for integer factorization and discrete logarithms computation

Cyril Bouvier, IMB, Université de Bordeaux
Wednesday, December 16, 2015

In this talk, I will present some results obtained during my Ph.D on integer factorization and discrete logarithm computation in finite fields. First, I will present the ECM algorithm for integer factorization and a new method to analyze the elliptic curves used in this algorithm by studying the Galois properties of division polynomials. Then, I will talk about the NFS algorithm for integer factorization and in particular the polynomial selection step for which I will propose improvements of existing algorithms. Finally, I will talk about a common step of the NFS algorithm for integer factorization and the NFS-DL and FFS algorithms for discrete logarithm computations: the filtering step. I will explain this step thoroughly and present an
improvement for which I will study the impact using data from several computations of discrete logarithms and factorizations.

Kolmogorov complexity versions of busy beavers

Mikhail Andreev, Moscow
Wednesday, December 16, 2015

The busy beaver function was introduced by Tibor Rado as the maximum number which can be printed by a Turing machine with at most N internal states over the {0,1} alphabet. In complexity theory, an analogue of the busy beaver number was introduced by Gacs as the maximum number with Kolmogorov complexity at most N. This function is defined up to an additive term in the argument (just as complexity is defined up to an additive term). One can use different types of Kolmogorov complexity to get different busy beaver functions. In this talk I am going to give formal definitions of the complexity busy beavers for plain and prefix-free Kolmogorov complexity, and discuss some basic properties of these functions and the relations between them.

The halting problem and its approximation

Alexander Shen, Escape, LIRMM
Wednesday, December 9, 2015

Often we say that "for any reasonable programming language [this or that]". But what does "reasonable" mean here? There are several possible requirements: one of them is the s-m-n theorem, or G\"odel property: any other programming language can be efficiently translated into it. It is not obvious that this guarantees the invariance of the properties we care about: for example, it is not obvious (but true) that for every program we can effectively produce infinitely many equivalent programs (we can add comments in a standard programming language, but the translator could start by deleting them). The Rogers equivalence theorem helps by proving that every two G\"odel programming languages are related by a computable bijection of programs.

There are some properties for which this is not enough. For example, one may ask whether the halting problem is generically solvable, i.e., whether there is an algorithm that correctly solves it for a fraction of inputs that is asymptotically $1$. Here, for some programming languages, the answer is trivial --- e.g., if we require that a program file that contains "control-Z" terminates immediately, then asymptotically almost all programs terminate. (A more interesting result of the same type: most Turing machines with N states on a one-sided tape fall off the tape and terminate (Hamkins, Myasnikov).) Still, there is a natural requirement on the programming language (``efficient optimality'', the existence of a translator that increases the length only by $O(1)$), and a corresponding invariance result due to Schnorr.

For the halting problem, however, a weaker property of optimality (without the efficiency requirement) is enough, and there is a simple reason related to Chaitin-like $\Omega$-numbers: for an optimal programming language, the number of terminating programs of length at most $n$ has complexity $n-O(1)$. This explains why the halting problem is not generically solvable for all optimal languages.

Valuative capacity of a set of infinite words

Youssef Fares, LAMFA, Amiens
Wednesday, November 25, 2015

In 1997, Manjul Bhargava gave a generalization of the notion of factorial. This notion is linked to the notion of $p$-ordered sequence, also introduced by Bhargava. Using the latter notion, one can associate to each subset $E$ of $Z_p$ a sequence called the characteristic sequence of $E$. This sequence is, in general, not easy to determine.
Viewing a set $M$ of infinite words (over a finite alphabet) as a subset of $Z_p$, one can study the characteristic sequence of $M$ and its asymptotic limit.
The goal of the talk is to recall all these notions, to give simple properties related to them, and to study two particular cases:
1) the case of a subshift of finite type;
2) the case of a set of Sturmian words.

A combinatorial version of the Slepian-Wolf problem

Andrei Romashchenko, Escape, LIRMM
Wednesday, October 14, 2015

(joint work with Daniyar Chumbalov, EPFL)

We study the following combinatorial version of the Slepian-Wolf coding scheme. Two isolated Senders are given binary strings $X$ and $Y$ respectively; the length of each string is equal to $N$, and the Hamming distance between the strings is bounded by some threshold $R$. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings $X$ and $Y$. The aim is to minimise the lengths of the transmitted messages.

This problem lies on the border line between coding theory and communication complexity, and to cope with it we need tools from both fields. We prove nontrivial lower bounds for a deterministic variant of this problem with syndrome-coding, i.e., for the case where at least one of the Senders uses linear encoding. For the probabilistic setting (with randomised parties) we find the theoretical optimum of communication complexity and construct an effective (polynomial time) protocol with optimal lengths of messages.
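
A toy illustration of the syndrome-coding idea (my own, with numpy assumed; not the protocols of the talk), using the Hamming (7,4) code: one sender holding Y transmits only the 3-bit syndrome H*Y, and the receiver, who holds X at Hamming distance at most 1 from Y, recovers Y.

import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],        # parity-check matrix of Hamming(7,4):
              [0, 1, 1, 0, 0, 1, 1],        # column j is the binary expansion of j+1
              [0, 0, 0, 1, 1, 1, 1]])

def recover_Y(X, syndrome_Y):
    s = (H @ X + syndrome_Y) % 2            # = H (X xor Y), the syndrome of the difference
    pos = int(s[0] + 2*s[1] + 4*s[2])       # a nonzero syndrome encodes the flipped position (1-based)
    e = np.zeros(7, dtype=int)
    if pos:
        e[pos - 1] = 1
    return (X + e) % 2                      # Y = X xor (X xor Y)

Y = np.array([1, 0, 1, 1, 0, 0, 1])
X = Y.copy(); X[4] ^= 1                     # the receiver's string differs in one position
print(recover_Y(X, (H @ Y) % 2))            # prints Y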

Doubled patterns are 3-avoidable

Pascal Ochem, AlGCo, LIRMM
Wednesday, September 30, 2015

A pattern is doubled if all of its variables appear at least twice.
In the last talk, we conjectured that the so-called "power series method" and "entropy compression method" are equivalent.
The conjecture is now proven and we use the unified method to show
that most of the doubled patterns are avoidable over the 3-letter alphabet.
There remain 10 doubled patterns that are shown to be 3-avoidable with the use of morphisms.

Tabulating Class Groups of Quadratic Fields

Michael Jacobson, University of Calgary
Wednesday, September 23, 2015

Class groups of quadratic fields have been studied since the time of Gauss, and in modern times have been used in applications such as integer factorization and public-key cryptography. Tables of class groups are used to provide valuable numerical evidence in support of a number of unproven heuristics and conjectures, including those due to Cohen and Lenstra. In this talk, we discuss recent efforts to extend existing, unconditionally correct tables of both imaginary and real quadratic fields.

Note: this Eco-Escape seminar talk will take place within the broader setting of the Pôle Algo-Calcul seminar.

Randomness deficiencies: quantifying randomness

Gleb Novikov, Moscow
Wednesday, August 26, 2015

Algorithmic randomness divides binary sequences into random (believable as coin tossing outputs) and non-random. Adding a bit to a random sequence, we get a random sequence, so adding a billion zeros also preserves randomness. However, this sequence is ``less random''. Randomness deficiency captures this difference: a sequence is random iff its randomness deficiency is finite, but the deficiency can be large or small. The classical results (e.g., the van Lambalgen theorem on pairs of random sequences) have quantitative versions that are inequalities for randomness deficiencies. We consider several examples of this kind.

Antistochastic strings and their applications

Aleksei Milovanov, Moscow
Wednesday, August 26, 2015

The Kolmogorov complexity of a string is defined as the minimal length of a program that generates it. This shortest program, however, could require a lot of time to produce the string, while longer programs may be faster. Antistochastic strings are strings for which this trade-off effect happens in the strongest form. It turns out that they have interesting properties that can be used to prove some results in algorithmic information theory and coding theory.

Non-computation times of infinite time Turing machines

Sabrina Ouazzani, Escape, LIRMM
Wednesday, June 24, 2015

A general presentation of infinite time Turing machines, followed by a study of some ordinal properties related to clockability and admissibility.

Quasiperiodicity and non-computability in tilings

Andrei Romashchenko, Escape, LIRMM
Wednesday, May 20, 2015

We study tilings of the plane that combine strong properties of different nature: combinatorial (quasi-periodicity) and algorithmic (non-recursivity). Thus, we come up with a tile set that accepts only quasi-periodic and non-recursive tilings, and discuss some generalisations of this construction.

Synchronization and cellular automata

Gaétan Richard, Greyc, Caen
Wednesday, May 13, 2015

With a view to studying the robustness of complex systems, an interesting phenomenon is synchronization: starting from any global state, all the elements manage to synchronize. In this talk, we will rely on the discrete and regular model of one-dimensional cellular automata in order to sketch a formal definition of the phenomenon and explore which conditions allow it to appear.

An Introduction to Hyperelliptic Curve Arithmetic

Renate Scheidler, University of Calgary, Canada
Wednesday, May 6, 2015

Elliptic and low genus hyperelliptic curves represent a very suitable setting for public key cryptography, due to their efficient arithmetic and excellent security properties. Elliptic curves support simpler and faster arithmetic than their hyperelliptic counterparts, but this is offset by the fact that genus 2 hyperelliptic curves achieve the same level of security with a base field that is half the size of that used for elliptic curves. Our main protagonist in this primer on hyperelliptic curves will be the Jacobian, which is the hyperelliptic analogue of the group of points on an elliptic curve. We will introduce a computationally highly useful representation of the elements of the Jacobian, along with their two-step arithmetic consisting of addition and reduction. This talk is targeted at an audience with no or minimal exposure to elliptic curves and no prior knowledge of algebraic curves in general.

Construction of hyperelliptic curves with complex multiplication

Sorina Ionica, Université de Bordeaux 1
Wednesday, April 22, 2015

The security of curve-based cryptographic protocols relies on the hardness of the discrete logarithm problem. To study this problem, we consider the following graph: the nodes are genus 3 curves and the edges are isogenies between the Jacobians of these curves. In this graph, called the isogeny graph, there are two kinds of nodes: strong hyperelliptic nodes, for which the discrete log is hard, and weak non-hyperelliptic nodes, for which efficient attacks are available. In this talk, we give a method to construct genus 3 hyperelliptic curves for cryptography and we present an approach for studying the discrete log in genus 3 via the isogeny graph.

The Collatz conjecture aka the 3n+1 problem

Mario Weitzer, Montanuniversität Leoben, Austria
Wednesday, April 15, 2015

The subject of this talk is the famous Collatz conjecture, which is also known as the 3n+1 problem. Let $n$ be any positive integer and apply the following transformation: if $n$ is even divide it by 2, and if $n$ is odd multiply it by 3 and add 1. The Collatz conjecture is the unproven observation that no matter what number n one starts with, repeated application of this transformation always eventually yields 1. Though extremely easy to formulate, this problem has been open since 1937. In this talk we will give an introduction to the problem, present basic facts and interpretations, and deduce and discuss first nontrivial results.
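
The transformation described above, in a few lines (illustration only).

def collatz_steps(n):
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print([collatz_steps(n) for n in range(1, 11)])   # 0, 1, 7, 2, 5, 8, 16, 3, 19, 6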

Maximal entropy stochastic processes for timed automata

Nicolas Basset, Oxford, United Kingdom
Wednesday, April 1, 2015

First we recall the question that is later generalized to timed automata: how can one randomly generate, in a quasi-uniform way, the paths of a graph?
One solution to this question is based on the maximal entropy stochastic process described by Shannon (known as the Parry measure in symbolic dynamics and ergodic theory).
We then generalize the notions of stochastic process and entropy to timed automata.
As in the untimed case, we construct a maximal entropy stochastic process for a timed automaton.
This stochastic process is defined using the spectral attributes (spectral radius, eigenfunctions) of the operator of Asarin and Degorre, a timed analogue of the adjacency matrix of a graph.
This process allows a quasi-uniform random generation of the runs of a timed automaton.
One application is the (quasi-)uniform random generation of permutations whose word of successive ascents and descents belongs to a regular language.
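
A sketch (my own, for the untimed construction recalled at the beginning of the abstract; numpy assumed) of the maximal entropy chain on a strongly connected graph, i.e. the Parry measure built from the Perron eigenvalue and eigenvector of the adjacency matrix.

import numpy as np

A = np.array([[0, 1, 1],          # adjacency matrix of a small strongly connected graph
              [1, 0, 1],
              [1, 0, 0]])
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam, r = vals[i].real, np.abs(vecs[:, i].real)     # Perron eigenvalue and eigenvector
P = A * r[None, :] / (lam * r[:, None])            # P[u, v] = A[u, v] * r[v] / (lam * r[u])
print(np.round(P, 3), P.sum(axis=1))               # a stochastic matrix (rows sum to 1)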

Game semantics for higher-order complexity

Hugo Férée, LORIA, Nancy
mercredi 25 mars 2015

Although the notions of complexity for functions of order 1 $(N \to N)$ and order 2 $((N \to N) \to N)$ are well defined and have many characterizations, this is not the case for functions of order 3 and above.
We propose to approach the complexity problem using game semantics, that is, by describing a computation as a dialogue between the computed object and its inputs. We then define the size and complexity of strategies for certain types of so-called sequential games. This applies in particular to the games associated with the higher-order language PCF, which allows us to define a notion of complexity for these functions. The resulting class of polynomial-time computable functions satisfies in particular the minimal properties expected of an analogue of FPTIME for higher-order functions.

Improved Parallel Gaussian Elimination for Gröbner Bases Computations in Finite Fields

Brice Boyer, LIP6, UPMC
Wednesday, March 25, 2015

We present a GPLv2 open source C library for linear algebra specialized for eliminating matrices generated during Gröbner Bases computations in algorithms like F4 or F5. We improve on the initial ideas of Faugère and Lachartre (FGL).

Our approach takes even more advantage of the very special structure the corresponding matrices have: quasi unit-triangular sparse matrices with patterns in the data. Optimizing this reduction step is crucial for the overall Gröbner Bases computation.

We first present improved data structures for storing these matrices in binary format, namely by compressing the repeated data in the rows and the column indexes, before gzip-ing the result. We can save up to an order of magnitude in space, allowing us to produce and share a large database of such matrices for benchmarking and testing purposes.

We show efficient blocked data structures for computing the reduced forms with specialized axpy and trsm operations, that take advantage of the patterns in the matrices and their sparsity. For instance, a special multiline storage allows cache friendly storage of sparse rows. We also reduce the number of operations, in a parallel friendly fashion, by changing the order of the operations in the elimination and by not computing the full row echelon form.

Finally, we present experimental results for sequential and parallel computations on NUMA architectures. With our new implementation we get a 5-10% speed-up for the sequential algorithm depending on the rank. We also get better scaling up to 32 (non hyper-threaded) cores instead of 16: we have speed-ups around $14$ or $16$ for bigger benchmarks. We also save more than twice the amount of memory used during the computation.

The whole alphabet (and then some) agree on a key in one round: making multilinear maps practical

Adeline Langlois, LASEC, EPFL, Lausanne, Suisse
mercredi 18 mars 2015

Multilinear maps have become versatile tools for designing cryptographic schemes since a first approximate realisation candidate was proposed by Garg, Gentry and Halevi (GGH). In this talk, I will introduce the more efficient variant GGHLite. Then I will present the improvements we made to provide an efficient implementation of GGHLite.

While GGHLite represents an asymptotic improvement in parameter sizes compared to GGH, implementing it naively would still not allow one to instantiate it for non-trivial parameter sizes. We hence propose a strategy which reduces parameter sizes further, together with several technical improvements that allow for an efficient implementation. To demonstrate the performance of our results, we also provide an implementation of a non-interactive N-partite Diffie-Hellman key exchange and report on experimental results. For example, thanks to our improvements we were able to compute a multilinear map of multilinearity level $\kappa$ at security level $\lambda=80$.
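For background (stated in the idealized multilinear-map abstraction; GGH-style graded encodings realize it only approximately and require an extraction step), a $\kappa$-linear map $e$ gives a one-round key exchange for $N = \kappa + 1$ parties: party $i$ picks a secret $a_i$, publishes $g^{a_i}$, and computes

$$K = e\bigl(g^{a_1}, \dots, g^{a_{i-1}}, g^{a_{i+1}}, \dots, g^{a_N}\bigr)^{a_i} = e(g, \dots, g)^{a_1 a_2 \cdots a_N},$$

the same value for every party.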

This is a joint work with M. Albrecht, C. Cocis and F. Laguillaumie.

Sparse Gröbner Bases: the Unmixed Case

Pierre-Jean Spaenlehauer, LORIA, Nancy
mercredi 4 mars 2015

We present an analog of Gröbner bases for semigroup algebras, and we propose variants of the algorithms F5 and FGLM to compute them. These algorithms provide tools to compute the solutions of sparse systems of equations when all the polynomials share the same monomial support (unmixed case). When these monomials correspond to the points with integer coordinates in a normal lattice polytope and under regularity assumptions, we prove complexity bounds which depend on the combinatorial properties of this polytope. Our prototype "proof-of-concept" implementation shows speed-ups (more than 100 for some examples) compared to classical Gröbner bases software for some families of sparse systems.
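For instance (an illustrative toy system, not taken from the talk), an unmixed sparse system in two variables where both equations share the monomial support $\{1, x, y, xy\}$, the lattice points of the unit square:

$$f_1 = 3 + 2x - y + 5xy = 0, \qquad f_2 = 1 - x + 4y + xy = 0.$$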

Joint work with Jean-Charles Faugère and Jules Svartz.

Complexity and decomposition to the set of factors (joint work with J. Cassaigne, A. Frid and L. Q. Zamboni)

Svetlana Puzynina, Post-doc, Lyon and Sobolev Institute, Novosibirsk
mercredi 25 février 2015

In this talk we introduce a new hierarchy of classes of languages and infinite words, and its connection with complexity classes. Namely, we say that a language belongs to the class $L_k$ if each of its words can be written as a concatenation of $k$ words from a language $S$ in which the number of words of each length $n$ is bounded by a constant. The class of infinite words whose set of factors is in $L_k$ is denoted by $W_k$. In this talk we focus on the relations between the classes $W_k$ and the subword complexity of infinite words, which, as usual, is defined as the number of factors of length $n$ of the word. In particular, we prove that the class $W_2$ coincides with the class of infinite words of linear complexity. On the other hand, although the class $W_k$ is included in the class of words of complexity $O(n^{k-1})$, this inclusion is strict for $k > 2$.

Computing the roots of polynomials with coefficients in a finite field

Bruno Grenet, ECO, LIRMM
mercredi 11 février 2015

Finding the roots of a polynomial with coefficients in a finite field is a fundamental problem of computer algebra. It serves as a building block for many algorithms in the area (factorization, interpolation, ...) and has applications, for instance, in cryptography and error-correcting codes. It is also one of the rare problems that we know how to solve in probabilistic polynomial time but not in deterministic polynomial time.

In my talk, I will present the main families of algorithms known for this problem, before focusing on a new approach that we recently proposed. This approach is based on the Graeffe transform, a tool coming from numerical analysis, and is particularly well suited to certain finite fields. It yields new deterministic complexity bounds for this problem, as well as an implementation in the free software Mathemagix that is faster than existing software.
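As a reminder (standard background rather than a statement from the talk), the Graeffe transform of a monic polynomial sends each root to its square:

$$f(x) = \prod_i (x - \alpha_i) \quad\Longrightarrow\quad g(x^2) = (-1)^{\deg f}\, f(x)\, f(-x), \qquad g(x) = \prod_i (x - \alpha_i^2).$$

Iterating it repeatedly squares the roots, which interacts well with the multiplicative structure of $\mathbf{F}_q^*$ when $q-1$ is divisible by a large power of 2, one reason the approach favours certain finite fields.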

Joint work with Grégoire Lecerf and Joris van der Hoeven.

Bisimulation games in modal logic

Valentin Shehtman, IITP, Moscow, Russia
mercredi 4 février 2015

Bisimulation games are modal logic analogues of classical Ehrenfeucht-Fraïssé games. They are closely related to bisimulations of Kripke models (transition systems) and can be used to study the expressive power of different logical languages and to prove completeness and finite model property theorems for logics and theories. In this talk we show how bisimulation games can simplify proofs of local finiteness of nonclassical propositional logics (or the corresponding varieties of algebras); a logic $L$ is called locally finite if for every finite $n$ there are only finitely many formulas in $n$ variables up to equivalence in $L$. Well-known results of this kind are the theorems of Segerberg-Maksimova and Kuznetsov-Komori on local finiteness of modal and intermediate logics of finite depth. In this talk we propose various generalizations of these theorems and discuss further lines of research.

Rank-metric codes: duality and MacWilliams identities

Alberto Ravagnani, Université de Neuchâtel (Suisse)
mercredi 21 janvier 2015

Delsarte rank-metric codes are linear spaces of matrices of bounded rank and entries in a given finite field. A different definition of rank-metric code was proposed later by Gabidulin. Both Delsarte and Gabidulin codes have been recently applied in linear network coding for error correction in multicast communications. Gabidulin codes also have applications in (information theoretic) security. In this talk we compare the duality theories of Delsarte and Gabidulin codes, proving that the former can be regarded as a generalization of the latter. We also show how one can derive MacWilliams identities (explicit relations between a code and its dual) for Delsarte codes using a double-counting argument. The original proof by Delsarte is based on the theory of association schemes and regular semilattices. The result also applies to Gabidulin codes. As an application of the results to a classical problem in enumerative combinatorics, we provide a recursive formula for the number of rectangular matrices of given size, rank and h-trace over a finite field.
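For reference (standard definitions assumed here, not quoted from the abstract), both families live in the rank metric

$$d(A, B) = \mathrm{rank}(A - B), \qquad A, B \in \mathbf{F}_q^{n \times m}:$$

a Delsarte code is an $\mathbf{F}_q$-linear subspace of $\mathbf{F}_q^{n \times m}$, while a Gabidulin code is an $\mathbf{F}_{q^m}$-linear subspace of $\mathbf{F}_{q^m}^n$, viewed as a space of matrices by expanding coordinates over an $\mathbf{F}_q$-basis of $\mathbf{F}_{q^m}$.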

Computing modular polynomials in dimension 1 and 2

Enea Milio, IMB, Université de Bordeaux 1
mercredi 17 décembre 2014

A modular polynomial of level $p$ in dimension 1 is a polynomial that parametrizes isomorphism classes of elliptic curves equipped with an isogeny of degree $p^2$. These polynomials have cryptographic applications, notably because they make it possible to construct elliptic curves with a prescribed number of points and to compute isogeny graphs. We will first recall the various notions involved in the definition of these polynomials and then describe an evaluation/interpolation algorithm to compute them. In a second part, we will give a generalization of this notion to dimension 2 and describe some results we have obtained.

On palindromic length

Gwenaël Richomme, Escape, LIRMM
mercredi 10 décembre 2014

The palindromic length of a finite word is the smallest length of a factorization of this word into palindromes. A. E. Frid, S. Puzynina and L. Q. Zamboni conjectured that every infinite word whose factors have bounded palindromic length is ultimately periodic. We will review various aspects of this conjecture and prove it in a particular case in which the palindromic length is replaced by a greedily computed variant.
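To make the definition concrete, a short sketch (illustrative Python, not the greedy variant discussed in the talk) computing the palindromic length by dynamic programming:

```python
def palindromic_length(w):
    # pl[i] = minimal number of palindromes whose concatenation equals w[:i]
    n = len(w)
    pl = [0] + [n] * n            # the empty word needs 0 palindromes
    for i in range(1, n + 1):
        for j in range(i):
            part = w[j:i]
            if part == part[::-1]:        # w[j:i] is a palindrome
                pl[i] = min(pl[i], pl[j] + 1)
    return pl[n]

print(palindromic_length("abaab"))  # 2, e.g. abaab = a . baab
```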

On the discrete compressed sensing problem

Grigori Kabatianski, Institute for Information Transmission Problems, Moscow
mercredi 26 novembre 2014

The compressed sensing (CS) problem can be formulated in the following way [1,2]: find a real vector $x \in \mathbf{R}^n$ from a minimal number of linear measurements, i.e. from the equation $s = Hx^T + e$, where the vector $x$ is $T$-sparse, i.e. $||x||_0 \leq T$, and the measurement error vector $e$ is small enough. The discrete version of the CS problem replaces the requirement that $||e||_2$ be small by the requirement that the vector $e$ be $L$-sparse. A particular case of this problem is, for instance, the famous Erdős-Ulam problem of "searching with a lie", also known as the "problem of 20 questions" [3]. In the talk we give a solution of the discrete CS problem in the same way as RS codes provide a solution of the main coding-theory problem when the code length is at most the field's cardinality. It is worth mentioning that RS codes are not good for the discrete CS problem, as they require redundancy at least $4TL$; see the modern analysis [5] of Prony's algorithm [4]. We have found an optimal solution which uses redundancy only $2(T+L)$. In conclusion, it will be shown how these results can be applied to a restricted version of the ordinary CS problem. The talk is based on joint research of the speaker with Serge Vladuts, Cedric Tavernier and Valery Lomakov.

References

[1] D.L. Donoho, "Compressed sensing", IEEE Transactions on Information Theory, v. 52 (4), pp. 1289-1306, 2006.

[2] E.J. Candes, T. Tao, "Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? ", IEEE Transactions on Information Theory, v. 52 (12), pp. 5406 - 5425, 2006.

[3] A. Pelc, "Solution of Ulam's Problem on Searching with a Lie", Journal of Combinatorial Theory, Series A, 44, pp. 129-140, 1987.

[4] R. Prony, "Essai expérimental et analytique sur les lois de la dilatabilité des fluides élastiques", Journal de l'École Polytechnique, 1, pp. 24-76, 1795.

[5] M. T. Comer, E. L. Kaltofen, C. Pernet, "Sparse Polynomial Interpolation and Berlekamp-Massey Algorithms That Correct Outlier Errors in Input Values", in Proc. ISSAC 2011, pp. 138-145.

Randomizing Scalar Multiplication Using Exact Covering Systems of Congruences

Laurent Imbert, LIRMM, CNRS
mercredi 19 novembre 2014

Since the pioneering work of Kocher, side-channel attacks have been taken very seriously by researchers and designers of cryptographic algorithms. By attacking the implementations instead of the underlying mathematical problems, these attacks have proved very powerful against both private-key and public-key algorithms.

In this talk, I will briefly sketch the basics of these attacks and present some common countermeasures. I will then present a novel family of scalar multiplication algorithms (an operation used intensively in elliptic curve cryptography) based on exact covering systems of congruences. At the lower level, the algorithms are very regular: they consist of a repeating sequence of identical instructions. At a higher level, however, they are uniformly randomized and thus behave very irregularly, making simple, differential and horizontal attacks much more difficult.
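For concreteness, a small exact covering system of congruences (an illustrative example, not necessarily one used in the construction):

$$\{\, 0 \ (\mathrm{mod}\ 2),\ \ 1 \ (\mathrm{mod}\ 4),\ \ 3 \ (\mathrm{mod}\ 4) \,\};$$

every integer satisfies exactly one of these congruences.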

This is a joint work with Eleonora Guerrini (LIRMM) and Theo Winterhalter (ÉNS Cachan).

Turing degrees of minimal subshifts

Pascal Vannier, LACL (Paris-Est Créteil)
mercredi 12 novembre 2014

Subshifts are sets of colorings of $\mathbf{Z}^d$ over a finite alphabet that are closed and invariant under translation, or, equivalently, definable by a family of forbidden patterns. A subshift is minimal if it properly contains no other subshift; every subshift therefore contains a minimal subshift. A minimal subshift has the particularity that all of its elements have the same patterns, and that for every pattern of size $n$ appearing in some configuration there exists an $N$ such that this pattern appears in every window of size $N$ of every configuration. We are interested here in the computational structure of these subshifts, and in particular in the structure of their sets of Turing degrees.

Entropy compression and shuffle squares

Pascal Ochem, AlGCo, LIRMM
mercredi 22 octobre 2014

Daniel Gonçalves, Mickael Montassier and Alexandre Pinlou have developed the use of entropy compression for graph coloring. We study this method for avoidability problems, i.e., proving that there exists an infinite word avoiding certain forbidden factors over an alphabet as small as possible.

We show that shuffle squares (words obtained by shuffling a word with itself) are avoidable over an alphabet of size 7. Also, caterpillars admit a nonrepetitive list coloring with lists of size 5. Finally, we conjecture that this method is always at least as powerful as a competing method based on a generating series.

Extending a notion of quasiperiodicity to two dimensions

Guilhem Gamard, Escape, LIRMM
mercredi 8 octobre 2014

A word is quasiperiodic if it can be covered by occurrences of another word, called a quasiperiod. For example, ababaabababa admits aba and ababa as quasiperiods. Moreover, a word is said to be multi-scale quasiperiodic if it has infinitely many quasiperiods.

First, we will recall how, in the setting of right-infinite words, these notions of quasiperiodicity and multi-scale quasiperiodicity relate to other classical concepts of combinatorics on words, such as uniform recurrence or factor complexity.

Then we will generalize these notions to two-dimensional infinite words and study which results from the first part carry over to dimension two. We will also explain why some results do not generalize.

(Gaussian) Shift Radix Systems - Introduction and Results

Mario Weitzer, Montanuniversität Leoben, Autriche
mercredi 3 septembre 2014

Shift Radix Systems (SRS) turned out to be the missing link between two generalizations of positional notation systems - Beta-Expansions and Canonical Number Systems (CNS) - which have been studied extensively during the last decade but still leave behind many open questions and unsolved problems. Unlike positional notation systems, where every integer greater than 1 can serve as a base, in these general settings only the elements of rather complicated parameter sets satisfy certain natural finiteness conditions. An introduction to SRS is given and results related to the characterization of such sets are presented. These results settle two previously open topological questions: for one specific set it is shown that both the set and its complement are disconnected; for another set, known as Pethő's Loudspeaker, a conjecture on its shape is formulated and proved to some extent.
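For orientation, the usual definition (assumed background, not quoted from the abstract): for a parameter $\mathbf{r} = (r_1, \dots, r_d) \in \mathbf{R}^d$, the shift radix system is the map $\tau_{\mathbf{r}} : \mathbf{Z}^d \to \mathbf{Z}^d$ given by

$$\tau_{\mathbf{r}}(a_1, \dots, a_d) = \bigl(a_2, \dots, a_d, -\lfloor r_1 a_1 + \dots + r_d a_d \rfloor\bigr),$$

and $\mathbf{r}$ satisfies the finiteness property when every orbit of $\tau_{\mathbf{r}}$ eventually reaches $\mathbf{0}$; the sets mentioned above collect exactly such parameters.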